Science.gov

Sample records for dynamical likelihood method

  1. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
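
    A schematic of the two estimation strategies can be written down for a toy one-parameter model. The sketch below is an illustration only, not the authors' education model: it estimates a location parameter by maximum likelihood and by a simulated method of moments that matches the sample mean and variance, with an identity weighting matrix and a fixed set of simulation draws standing in for the tuning choices the paper investigates.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        y = rng.normal(loc=1.5, scale=1.0, size=500)       # "observed" data (toy model: y = theta + noise)

        # Maximum likelihood: minimize the negative Gaussian log-likelihood over theta.
        neg_loglik = lambda theta: -norm.logpdf(y, loc=theta, scale=1.0).sum()
        ml = minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x

        # Simulated method of moments: match the mean and variance of data simulated from
        # the model, using common random numbers so the criterion is smooth in theta.
        eps = rng.normal(size=(20, y.size))                # fixed simulation draws
        obs_moments = np.array([y.mean(), y.var()])
        def smm_crit(theta):
            sim = theta + eps
            diff = np.array([sim.mean(), sim.var()]) - obs_moments
            return diff @ np.eye(2) @ diff                 # identity weighting matrix (a tuning choice)
        smm = minimize_scalar(smm_crit, bounds=(-5, 5), method="bounded").x

        print(f"ML estimate: {ml:.3f}, SMM estimate: {smm:.3f}")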

  2. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton + Jets Events with the Collider Detector at Fermilab

    SciTech Connect

    Kubo, Taichi

    2008-02-01

    We have measured the top quark mass with the dynamical likelihood method. The data, corresponding to an integrated luminosity of 1.7 fb⁻¹, were collected in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV with the CDF detector at the Fermilab Tevatron during the period March 2002-March 2007. We select tt̄ pair-production candidates by requiring one high-energy lepton and four jets, at least one of which must be tagged as a b-jet. To reconstruct the top quark mass, we use the dynamical likelihood method, which is based on the maximum likelihood method: the likelihood is defined as the differential cross section multiplied by the transfer function from observed quantities to parton quantities, as a function of the top quark mass and the jet energy scale (JES). With this method, we measure the top quark mass to be 171.6 ± 2.0 (stat. + JES) ± 1.3 (syst.) = 171.6 ± 2.4 GeV/c².
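
    The structure of such a per-event likelihood can be sketched with toy ingredients: a production density in the parton-level mass multiplied by a transfer function to the reconstructed jet-level quantity, integrated over the parton-level value and scanned over the top-mass hypothesis. The Breit-Wigner form, transfer-function width, and event values below are illustrative assumptions, not the CDF parameterization.

        import numpy as np

        def breit_wigner(m, m_top, width=1.5):
            """Toy parton-level mass density (Breit-Wigner shape, arbitrary width)."""
            return 1.0 / ((m**2 - m_top**2)**2 + (m_top * width)**2)

        def transfer(m_jets, m_parton, sigma=12.0):
            """Toy transfer function: Gaussian smearing from parton-level to jet-level mass."""
            return np.exp(-0.5 * ((m_jets - m_parton) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

        def event_likelihood(m_jets, m_top, m_grid=np.linspace(100, 250, 601)):
            """Integrate the normalised production density times the transfer function."""
            bw = breit_wigner(m_grid, m_top)
            bw /= np.trapz(bw, m_grid)
            return np.trapz(bw * transfer(m_jets, m_grid), m_grid)

        # Pretend reconstructed jet-level masses for a handful of events (made-up numbers).
        events = [168.0, 175.5, 181.2, 170.4, 166.9]
        hypotheses = np.linspace(150, 190, 81)
        log_like = [sum(np.log(event_likelihood(m, mt)) for m in events) for mt in hypotheses]
        print("most likely m_top:", hypotheses[int(np.argmax(log_like))])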

  3. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton plus Jets Events in 1.96 TeV Proton-Antiproton Collisions

    SciTech Connect

    Yorita, Kohei

    2005-03-01

    We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in pp̄ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this paper was accumulated from March 2002 through August 2003, which corresponds to an integrated luminosity of 162 pb⁻¹.

  4. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  5. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  6. Composite likelihood method for inferring local pedigrees

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

    Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can infer distant relatives more accurately than existing methods. Furthermore, we illustrate the utility of the method on a sample of Greenlandic Inuit. PMID:28827797

  7. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
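
    A minimal sketch of the core idea, under assumptions not taken from the paper (a Gaussian spot model on a small pixel grid with Poisson photon noise and a flat background): estimate the spot position by maximizing the Poisson log-likelihood and compare it with a simple centroid, which the background biases toward the grid centre.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n = 8
        yy, xx = np.mgrid[0:n, 0:n]

        def spot(x0, y0, flux=500.0, sigma=1.2, bg=2.0):
            """Mean photon counts: Gaussian spot plus a flat background (toy model)."""
            g = np.exp(-0.5 * ((xx - x0)**2 + (yy - y0)**2) / sigma**2)
            return bg + flux * g / g.sum()

        true_pos = (3.3, 4.1)
        data = rng.poisson(spot(*true_pos))                 # one noisy detector frame

        def neg_loglik(p):
            lam = spot(p[0], p[1])
            return np.sum(lam - data * np.log(lam))         # Poisson NLL up to a constant

        ml = minimize(neg_loglik, x0=[n / 2, n / 2], method="Nelder-Mead").x
        com = (np.sum(xx * data) / data.sum(), np.sum(yy * data) / data.sum())
        print("true:", true_pos, "ML:", np.round(ml, 2), "centroid:", np.round(com, 2))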

  8. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  9. Measurement of the top quark mass with the dynamical likelihood method using lepton plus jets events with b-tags in pp̄ collisions at √s = 1.96 TeV

    SciTech Connect

    Abulencia, A.; Budd, S.; Chu, P.H.; Ciobanu, C.I.; Errede, D.; Errede, S.; Gerberich, H.; Grundler, U.; Junk, T.R.; Kraus, J.; Liss, T.M.; Marino, C.; Pitts, K.; Rogers, E.; Taffard, A.; Veramendi, G.; Vickey, T.; Zhang, X.; Acosta, D.; Cruz, A.

    2006-05-01

    This paper describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/antitop (tt̄) pairs in pp̄ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb⁻¹. We use the tt̄ candidates in the 'lepton+jets' decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the event likelihoods are multiplied together to derive the top quark mass by the maximum likelihood method. Using 63 tt̄ candidates observed in the data, with 9.2 events expected from background, we measure the top quark mass to be 173.2 +2.6/-2.4 (stat.) ± 3.2 (syst.) GeV/c², or 173.2 +4.1/-4.0 GeV/c².

  10. Measurement of the top quark mass with the dynamical likelihood method using lepton plus jets events with b-tags in p anti-p collisions at √s = 1.96 TeV

    SciTech Connect

    Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, Anthony A.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.; /Taiwan, Inst. Phys. /Argonne /Barcelona, IFAE /Baylor U. /INFN, Bologna /Bologna U. /Brandeis U. /UC, Davis /UCLA /UC, San Diego /UC, Santa Barbara

    2005-12-01

    This report describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/anti-top (tt̄) pairs in pp̄ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb⁻¹. They use the tt̄ candidates in the "lepton+jets" decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the event likelihoods are multiplied together to derive the top quark mass by the maximum likelihood method. Using 63 tt̄ candidates observed in the data, with 9.2 events expected from background, they measure the top quark mass to be 173.2 +2.6/-2.4 (stat.) ± 3.2 (syst.) GeV/c², or 173.2 +4.1/-4.0 GeV/c².

  11. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.
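
    As a concrete example of one such performance characteristic, the sketch below computes the log-likelihood-ratio cost (Cllr), a standard summary of the validity (discrimination plus calibration) of a set of ground-truth-labelled LRs; the LR values here are synthetic placeholders rather than fingerprint data, and Cllr stands in generically for the primary characteristics discussed in the paper.

        import numpy as np

        def cllr(lr_same_source, lr_diff_source):
            """Log-likelihood-ratio cost: 0 is ideal; 1 corresponds to an uninformative LR of 1."""
            pen_ss = np.mean(np.log2(1.0 + 1.0 / np.asarray(lr_same_source)))
            pen_ds = np.mean(np.log2(1.0 + np.asarray(lr_diff_source)))
            return 0.5 * (pen_ss + pen_ds)

        # Synthetic ground-truth-labelled LR values (placeholders).
        rng = np.random.default_rng(7)
        lr_ss = np.exp(rng.normal(2.0, 1.0, 200))    # same-source comparisons: LRs mostly above 1
        lr_ds = np.exp(rng.normal(-2.0, 1.0, 200))   # different-source comparisons: LRs mostly below 1
        print(f"Cllr = {cllr(lr_ss, lr_ds):.3f}")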

  12. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  13. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that remains constant in time, and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high-dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
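
    The core idea, estimating a parametric sensitivity from the covariance between an observable and the score function, can be illustrated on a toy model with a known answer. The exponential distribution and the observable f(X) = X^2 below are assumptions chosen only for this example.

        import numpy as np

        rng = np.random.default_rng(3)
        theta, n = 1.5, 200_000
        x = rng.exponential(scale=1.0 / theta, size=n)   # X ~ Exponential with rate theta

        f = x**2                                         # observable f(X) = X^2
        score = 1.0 / theta - x                          # d/dtheta log p(x; theta)

        plain    = np.mean(f * score)                    # plain likelihood-ratio estimator
        centered = np.mean((f - f.mean()) * score)       # centered version: Cov(f, score)

        print("analytic d/dtheta E[X^2]:", -4.0 / theta**3)
        print("plain LR estimate:   ", plain)
        print("centered LR estimate:", centered)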

  14. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that remains constant in time, and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high-dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  15. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, in which the probability that a response is missing depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike the ignorable missing data problem, few papers address non-ignorable missing data apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on the observed data only is biased.
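
    Owen's empirical likelihood for a mean, on which the constrained estimators here build, can be sketched in a few lines: profile the empirical log-likelihood ratio at a hypothesized mean by solving for the Lagrange multiplier. This complete-data toy omits the paper's missing-data machinery and only shows the mechanics.

        import numpy as np
        from scipy.optimize import brentq

        def el_logratio(x, mu):
            """Empirical log-likelihood ratio for the mean, profiled over the Lagrange multiplier."""
            d = np.asarray(x, dtype=float) - mu
            if d.max() <= 0 or d.min() >= 0:
                return -np.inf                       # mu lies outside the convex hull of the data
            lo, hi = -1.0 / d.max(), -1.0 / d.min()  # lambda range keeping all weights positive
            eps = 1e-8 * (hi - lo)
            lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo + eps, hi - eps)
            return -np.sum(np.log1p(lam * d))        # equals 0 at the sample mean

        rng = np.random.default_rng(5)
        x = rng.normal(2.0, 1.0, 80)
        for mu in (1.7, x.mean(), 2.3):
            # -2 * log-ratio is asymptotically chi-square(1); about 3.84 marks a 95% bound
            print(f"mu = {mu:.2f}   -2 log ELR = {-2 * el_logratio(x, mu):.2f}")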

  16. Evaluating maximum likelihood estimation methods to determine the Hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
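
    To make the estimation target concrete, the sketch below writes down the exact Gaussian log-likelihood of fractional Gaussian noise from its autocovariance (with the variance profiled out) and maximizes it over H by a grid search. This illustrates the likelihood being maximized; it is not the S-PLUS implementation evaluated in the paper.

        import numpy as np
        from scipy.linalg import toeplitz, cho_factor, cho_solve

        def fgn_corr(n, H):
            """Autocorrelation of fractional Gaussian noise at lags 0..n-1."""
            k = np.arange(n, dtype=float)
            return 0.5 * (np.abs(k + 1)**(2*H) - 2 * np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))

        def profile_loglik(x, H):
            """Gaussian log-likelihood with the variance profiled out (additive constants dropped)."""
            n = len(x)
            c, low = cho_factor(toeplitz(fgn_corr(n, H)))
            quad = x @ cho_solve((c, low), x)
            logdet = 2.0 * np.sum(np.log(np.diag(c)))
            return -0.5 * (n * np.log(quad / n) + logdet)

        # Simulate fGn exactly via the Cholesky factor of its covariance (small n for speed).
        rng = np.random.default_rng(2)
        n, H_true = 512, 0.8
        x = np.linalg.cholesky(toeplitz(fgn_corr(n, H_true))) @ rng.normal(size=n)

        grid = np.linspace(0.55, 0.95, 81)
        H_hat = grid[np.argmax([profile_loglik(x, H) for H in grid])]
        print("true H:", H_true, "  ML estimate:", round(float(H_hat), 3))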

  17. Error detection for genetic data, using likelihood methods

    SciTech Connect

    Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.

    1996-01-01

    As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown by using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show the index detects errors for small values of θ with high power and an acceptable false-positive rate. The method was also used to check for errors in the DCM laboratory pedigree data and to estimate the error rate in the CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with estimated false-positive and false-negative rates obtained using simulation.

  18. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method is presented that directly establishes modal models of structural dynamic systems satisfying two physically motivated constraints. The constraints imposed on the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped, so normal (real) modes are needed for comparison with these analytical models. The work in this paper is a further development of a recently introduced modal parameter identification method, ML-MM, that enables us to establish modal models satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  1. Comparisons of likelihood and machine learning methods of individual classification

    USGS Publications Warehouse

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    “Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to its population of origin on the basis of its multilocus genotype and the expectation of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). Main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). However…
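
    The likelihood-estimate assignment idea can be sketched directly: compute, for each candidate population, the probability of an individual's multilocus genotype from that population's allele frequencies (assuming Hardy-Weinberg and linkage equilibrium) and assign the individual to the population with the highest likelihood. The allele frequencies, genotype, and frequency floor below are made up for the illustration.

        import numpy as np

        # Allele frequencies per population and per locus (made-up values).
        freqs = {
            "popA": [{"a1": 0.7, "a2": 0.3}, {"b1": 0.2, "b2": 0.8}],
            "popB": [{"a1": 0.2, "a2": 0.8}, {"b1": 0.6, "b2": 0.4}],
        }

        def genotype_loglik(genotype, pop_freqs, floor=1e-3):
            """Log-likelihood of a multilocus genotype under HWE and linkage equilibrium."""
            ll = 0.0
            for (a, b), locus in zip(genotype, pop_freqs):
                p, q = locus.get(a, floor), locus.get(b, floor)   # floor guards unseen alleles
                ll += np.log(p * q if a == b else 2 * p * q)
            return ll

        def assign(genotype):
            scores = {pop: genotype_loglik(genotype, f) for pop, f in freqs.items()}
            return max(scores, key=scores.get), scores

        individual = [("a1", "a1"), ("b2", "b2")]   # homozygous at both loci
        pop, scores = assign(individual)
        print("assigned to:", pop, {k: round(v, 2) for k, v in scores.items()})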

  2. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    PubMed Central

    2012-01-01

    Background: Predicting a system's behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. Results: In this article reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding some insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. Conclusions: The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code which can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/∼ckreutz/PPL. PMID:22947028

  3. Likelihood based observability analysis and confidence intervals for predictions of dynamic models.

    PubMed

    Kreutz, Clemens; Raue, Andreas; Timmer, Jens

    2012-09-05

    Predicting a system's behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. In this article reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding some insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code which can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/∼ckreutz/PPL.

  4. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    SciTech Connect

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
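
    The path-sampling identity behind thermodynamic integration can be checked on a conjugate toy model whose marginal likelihood is available in closed form: sample the power posterior p(y|theta)^t p(theta) at a ladder of temperatures t, average the log-likelihood at each temperature, and integrate over t. The normal-normal model, temperature ladder, and Metropolis settings below are illustrative choices, not the study's groundwater models.

        import numpy as np
        from scipy.stats import norm, multivariate_normal

        rng = np.random.default_rng(4)
        y = rng.normal(0.8, 1.0, size=20)                    # toy data: y_i ~ N(theta, 1), theta ~ N(0, 1)

        loglik = lambda th: norm.logpdf(y, loc=th, scale=1.0).sum()
        logpri = lambda th: norm.logpdf(th, loc=0.0, scale=1.0)

        def mean_loglik_at(t, n_iter=4000, step=0.5):
            """E[log p(y|theta)] under the power posterior p(y|theta)^t p(theta), via Metropolis."""
            th = 0.0
            lp = t * loglik(th) + logpri(th)
            trace = []
            for _ in range(n_iter):
                prop = th + step * rng.normal()
                lp_prop = t * loglik(prop) + logpri(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    th, lp = prop, lp_prop
                trace.append(loglik(th))
            return np.mean(trace[n_iter // 2:])              # discard burn-in

        temps = np.linspace(0, 1, 11)**3                     # ladder concentrated near t = 0
        ti_estimate = np.trapz([mean_loglik_at(t) for t in temps], temps)

        # Analytic check: with theta ~ N(0, 1) and unit noise, y ~ N(0, I + 11^T).
        exact = multivariate_normal(mean=np.zeros(y.size),
                                    cov=np.eye(y.size) + np.ones((y.size, y.size))).logpdf(y)
        print(f"thermodynamic integration: {ti_estimate:.2f}   exact log marginal: {exact:.2f}")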

  5. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  6. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

  7. A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews

    PubMed Central

    Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao

    2014-01-01

    Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, its computational simplicity, and some robustness to model mis-specifications. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to that of the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma. PMID:25512146

  8. An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data

    PubMed Central

    Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong

    2013-01-01

    This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169

  9. Comparative behaviour of the Dynamically Penalized Likelihood algorithm in inverse radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Llacer, Jorge; Solberg, Timothy D.; Promberger, Claus

    2001-10-01

    This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) The Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation.

  10. Maximum Log-Likelihood Method with Weighting Penalty for CRISM Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    He, L.; Arvidson, R. E.; O'Sullivan, J. A.; Politte, D. V.

    2017-06-01

    We develop the Maximum Log-likelihood Method with weighting penalties to denoise and reconstruct the CRISM data. Spatial and spectral weights determined by sampling degrees and noise levels are introduced to remove spatial and spectral artifacts.

  11. Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Imani, Mahdi; Braga-Neto, Ulisses M.

    2017-01-01

    Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.

  12. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  13. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    When the data are completely separated or quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method…

  14. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT, which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data, is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  15. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods.

    PubMed

    Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof

    2007-07-01

    We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
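
    The data-cloning recipe can be demonstrated on a model whose maximum likelihood answer is known: replicate the data K times, run a standard MCMC sampler on the resulting posterior, and read off the MLE as the posterior mean and its sampling variance as K times the posterior variance. The normal-mean model, prior, and random-walk Metropolis settings below are illustrative assumptions, not the ecological state-space models discussed in the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        y = rng.normal(3.0, 1.0, size=30)

        def clone_posterior(K, n_iter=20000):
            """Random-walk Metropolis on the posterior built from K copies ('clones') of the data."""
            step = 2.5 / np.sqrt(K * y.size)          # proposal scale shrinks as clones are added
            def logpost(th):
                return K * norm.logpdf(y, th, 1.0).sum() + norm.logpdf(th, 0.0, 10.0)
            th, lp, draws = 0.0, -np.inf, []
            for _ in range(n_iter):
                prop = th + step * rng.normal()
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    th, lp = prop, lp_prop
                draws.append(th)
            return np.array(draws[n_iter // 2:])      # discard burn-in

        K = 32
        d = clone_posterior(K)
        print("data-cloning MLE:     ", round(d.mean(), 3), "  (sample mean:", round(y.mean(), 3), ")")
        print("data-cloning var(MLE):", round(K * d.var(), 4), "  (analytic 1/n:", round(1 / y.size, 4), ")")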

  16. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion is unavoidably present to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as walking pedestrians, that often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.

  17. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
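
    The numerical point about evaluating the log-determinant inside the joint likelihood can be isolated in a few lines: for a spatial correlation matrix, a direct determinant quickly under- or overflows, whereas a factorization-based log-determinant stays stable and slots directly into the Gaussian log-likelihood. The sketch uses a Cholesky factor and an exponential correlation model as stand-ins for the paper's LU-based computation and extracted correlation function.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 400
        coords = rng.uniform(0, 10, size=(n, 2))                    # toy measurement-site locations
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        R = np.exp(-dist / 2.0) + 1e-6 * np.eye(n)                  # exponential correlation + white-noise floor

        # A direct determinant easily under- or overflows for large correlated matrices ...
        print("np.linalg.det(R):", np.linalg.det(R))

        # ... whereas a triangular factorization gives a stable log-determinant.
        L = np.linalg.cholesky(R)
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        print("log|R| via Cholesky:", logdet)

        # Gaussian log-likelihood of one observation vector x with zero mean and covariance R.
        x = L @ rng.normal(size=n)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, x))         # R^{-1} x via two triangular solves
        loglik = -0.5 * (x @ alpha + logdet + n * np.log(2 * np.pi))
        print("log-likelihood:", loglik)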

  18. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)

  19. A Maximum Likelihood Method for Latent Class Regression Involving a Censored Dependent Variable.

    ERIC Educational Resources Information Center

    Jedidi, Kamel; And Others

    1993-01-01

    A method is proposed to simultaneously estimate regression functions and subject membership in "k" latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The method is illustrated through a consumer psychology application. (SLD)

  1. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics.

    PubMed

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-Sung; Maki, Kara L; Ross, David S; Aquavella, James V; Rolland, Jannick P

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Assuming that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, yielding estimates with nanometer precision.

  2. Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood

    NASA Technical Reports Server (NTRS)

    Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.

    2016-01-01

    Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 x 30 m resolution predictions for more than 38,000 sq km of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.

  3. Evaluation of dynamic coastal response to sea-level rise modifies inundation likelihood

    USGS Publications Warehouse

    Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.

    2016-01-01

    Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km² of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.

  4. PhyPA: Phylogenetic method with pairwise sequence alignment outperforms likelihood methods in phylogenetics involving highly diverged sequences.

    PubMed

    Xia, Xuhua

    2016-09-01

    While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present the surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences, even when all optimization options are turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure of ML+MSA to recover the true topology is not due to insufficient search of tree space, but to the distortion of the phylogenetic signal by MSA methods. I have implemented in DAMBE both PhyPA and two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing.

  5. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and with flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  6. Likelihood methods for regression models with expensive variables missing by design.

    PubMed

    Zhao, Yang; Lawless, Jerald F; McLeish, Donald L

    2009-02-01

    In some applications involving regression the values of certain variables are missing by design for some individuals. For example, in two-stage studies (Zhao and Lipsitz, 1992), data on "cheaper" variables are collected on a random sample of individuals in stage I, and then "expensive" variables are measured for a subsample of these in stage II. So the "expensive" variables are missing by design at stage I. Both estimating function and likelihood methods have been proposed for cases where either covariates or responses are missing. We extend the semiparametric maximum likelihood (SPML) method for missing covariate problems (e.g. Chen, 2004; Ibrahim et al., 2005; Zhang and Rockette, 2005, 2007) to deal with more general cases where covariates and/or responses are missing by design, and show that profile likelihood ratio tests and interval estimation are easily implemented. Simulation studies are provided to examine the performance of the likelihood methods and to compare their efficiencies with estimating function methods for problems involving (a) a missing covariate and (b) a missing response variable. We illustrate the ease of implementation of SPML and demonstrate its high efficiency.

  7. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  8. A likelihood reformulation method in non-normal random effects models.

    PubMed

    Liu, Lei; Yu, Zhangsheng

    2008-07-20

    In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method.
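
    The multiply-and-divide trick can be written out for a single non-normal random effect: the integral of the conditional likelihood against a non-normal density g(b) is re-expressed as an expectation under the standard normal by weighting with the ratio g(b)/phi(b), which ordinary Gauss-Hermite quadrature then evaluates. The Student-t random effect, Poisson response, and quadrature order below are illustrative assumptions, not the paper's SAS implementation.

        import numpy as np
        from numpy.polynomial.hermite import hermgauss
        from scipy.stats import norm, t as student_t, poisson

        nodes, weights = hermgauss(30)                  # Gauss-Hermite rule for exp(-x^2)
        b_nodes = np.sqrt(2.0) * nodes                  # change of variable to a N(0, 1) expectation
        w = weights / np.sqrt(np.pi)

        def cluster_marginal_loglik(y, beta0, df=4.0):
            """log of the integral of prod_j Poisson(y_j | exp(beta0 + b)) g(b) db, with a
            Student-t random-effect density g, rewritten against the standard normal."""
            def integrand(b):
                cond = poisson.pmf(y, mu=np.exp(beta0 + b)).prod()
                ratio = student_t.pdf(b, df) / norm.pdf(b)   # the multiply-and-divide factor
                return cond * ratio
            return np.log(np.sum(w * np.array([integrand(b) for b in b_nodes])))

        y_cluster = np.array([2, 3, 1, 4])              # toy counts for one cluster
        print("marginal log-likelihood:", cluster_marginal_loglik(y_cluster, beta0=0.5))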

  9. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  12. Extraction of the Michel parameters from muon decay by a maximum likelihood method

    SciTech Connect

    McKee, R.J.

    1980-01-01

    A maximum likelihood method was developed for extracting the Michel parameters, which describe the decay spectrum of stopped polarized muons. The method was applied successfully in a Monte Carlo simulation of the apparatus. A complete error matrix for the set of Michel parameters was also obtained. In an experiment of 10⁸ muon decays, the statistical errors can be reduced by more than an order of magnitude over the currently accepted limits on the Michel parameters. 5 tables.

  13. Phase Noise Investigation of Maximum Likelihood Estimation Method for Airborne Multibaseline SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Magnard, C.; Small, D.; Meier, E.

    2015-03-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.

  14. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation under different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models

  15. Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.

    PubMed

    Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane

    2010-01-01

    An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
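
    As a rough illustration of the kind of dynamic programme the abstract describes, the sketch below maximises a histogram-type log-likelihood over contiguous bins; placing bin edges at midpoints between neighbouring observations is an assumption made here, not necessarily the authors' formulation.

```python
import math
from functools import lru_cache

def ml_quantise(values, n_bins):
    """Quantise continuous values into n_bins contiguous bins by dynamic
    programming, maximising the log-likelihood of the resulting piecewise-
    constant (histogram) density.  Sketch only: distinct values assumed,
    bin edges placed at midpoints between neighbouring observations."""
    x = sorted(values)
    n = len(x)
    # Candidate edges: data minimum, midpoints, data maximum.
    edges = [x[0]] + [(x[i] + x[i + 1]) / 2 for i in range(n - 1)] + [x[-1]]

    def bin_loglik(i, j):
        # Log-likelihood contribution of one bin holding x[i:j] (j exclusive).
        m, width = j - i, edges[j] - edges[i]
        return m * math.log(m / (n * width)) if width > 0 else float("-inf")

    @lru_cache(maxsize=None)
    def best(k, j):
        # Best log-likelihood (and left cut indices) for the first j points in k bins.
        if k == 1:
            return bin_loglik(0, j), (0,)
        scores = [(best(k - 1, i)[0] + bin_loglik(i, j), best(k - 1, i)[1] + (i,))
                  for i in range(k - 1, j)]
        return max(scores)

    score, cut_idx = best(n_bins, n)
    boundaries = [edges[i] for i in cut_idx[1:]]  # interior bin boundaries
    return score, boundaries

# Example: three maximum-likelihood bins for a small synthetic sample.
print(ml_quantise([0.1, 0.2, 0.25, 1.0, 1.1, 2.9, 3.0, 3.2], n_bins=3))
```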

  16. Simple imputation methods versus direct likelihood analysis for missing item scores in multilevel educational data.

    PubMed

    Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim

    2012-06-01

    Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were missing either completely at random or at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.

  17. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    PubMed

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
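
    For readers unfamiliar with the two objects being related, a sketch in generic notation (assumed here, not copied from the paper) may help.

```latex
% Sketch in assumed notation: stimulus x, low-dimensional projection v = K^T x.
% LNP model: the spike count y is Poisson with rate given by a nonlinearity f:
\[
  y \mid \mathbf{x} \;\sim\; \mathrm{Poisson}\!\big( f(K^{\top}\mathbf{x})\,\Delta t \big).
\]
% Single-spike information (the MID objective): the divergence between the
% spike-triggered and raw distributions of the projected stimulus v = K^{\top}\mathbf{x},
\[
  I_{\mathrm{ss}} \;=\; \int p(v \mid \mathrm{spike})\,
      \log \frac{p(v \mid \mathrm{spike})}{p(v)}\, dv .
\]
% Per the abstract, the empirical estimate of I_ss matches the normalized LNP
% log-likelihood up to terms that do not depend on K, so maximizing one maximizes the other.
```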

  18. A calibration method of self-referencing interferometry based on maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao

    2015-05-01

    Self-referencing interferometry has been widely used in wavefront sensing. However, the result of a wavefront measurement currently contains two parts: the real phase of the wavefront under test and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate the system error in a self-referencing interferometer. First, at least three phase difference distributions are obtained from three position measurements of the tested component: one basic position, one rotation, and one lateral translation. Then, the three phase difference data sets are combined through the maximum likelihood method to construct a likelihood function, and the wavefront under test and the system error are reconstructed by least-squares estimation using Zernike polynomials. The simulation results show that the proposed method can deal with the calibration of a self-referencing interferometer. The method can be used to reduce the effect of system errors on extracting and reconstructing the wavefront under test, and to improve the measurement accuracy of the self-referencing interferometer.
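
    One plausible way to write the measurement model sketched above, in notation assumed here rather than taken from the paper, is the following.

```latex
% Assumed measurement model: each measured phase-difference map combines the
% test wavefront W with the common system error S,
\[
  \Phi_1 = W + S, \qquad
  \Phi_2 = \mathcal{R}_{\alpha} W + S, \qquad
  \Phi_3 = \mathcal{T}_{d} W + S,
\]
% where \mathcal{R}_{\alpha} rotates and \mathcal{T}_{d} laterally shifts the test
% part.  Expanding W and S in Zernike polynomials and assuming Gaussian noise, the
% maximum-likelihood estimate reduces to a least-squares fit of the coefficients
% to the three maps, which separates W from S.
```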

  19. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
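
    In generic notation (assumed here), the estimation problem and the information measure mentioned above take the following form.

```latex
% From the vector of digitized detector signals g, the interaction parameters
% \theta = (x, y, z, E) are estimated by maximum likelihood,
\[
  \hat{\theta} \;=\; \arg\max_{\theta}\; \ln p(g \mid \theta),
\]
% and the Fisher information matrix quantifies the information the signals carry,
\[
  F_{jk}(\theta) \;=\; \mathbb{E}\!\left[
      \frac{\partial \ln p(g \mid \theta)}{\partial \theta_j}\,
      \frac{\partial \ln p(g \mid \theta)}{\partial \theta_k} \right],
\]
% whose inverse bounds the covariance of any unbiased estimator (Cramér–Rao bound).
```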

  20. Retrospective Likelihood Based Methods for Analyzing Case-Cohort Genetic Association Studies

    PubMed Central

    Shen, Yuanyuan; Cai, Tianxi; Chen, Yu; Yang, Ying; Chen, Jinbo

    2016-01-01

    The case-cohort (CCH) design is a cost-effective design for assessing genetic susceptibility with time-to-event data, especially when the event rate is low. In this work, we propose a powerful pseudo-score test for assessing the association between a single nucleotide polymorphism (SNP) and the event time under the CCH design. The pseudo-score is derived from a pseudo-likelihood, which is an estimated retrospective likelihood that treats the SNP genotype as the dependent variable and the time-to-event outcome and other covariates as independent variables. It exploits the fact that the genetic variable is often distributed independently of covariates or related only to a low-dimensional subset. Estimates of hazard ratio parameters for association can be obtained by maximizing the pseudo-likelihood. A unique advantage of our method is that it allows the censoring distribution to depend on covariates that are only measured for the CCH sample, while not requiring follow-up or covariate information on subjects not selected into the CCH sample. In addition to these flexibilities, the proposed method has high relative efficiency compared with commonly used alternative approaches. We study the large-sample properties of this method and assess its finite-sample performance using both simulated and real data examples. PMID:26177343

  1. Efficient Simulation and Likelihood Methods for Non-Neutral Multi-Allele Models

    PubMed Central

    Joyce, Paul; Genz, Alan

    2012-01-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequences, became more readily available, connecting population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate population-genetic parameters using full DNA sequences. Now, in the genomics era, methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency-based, non-neutral, parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to those produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10⁹ rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection. PMID:22697240

  2. Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network

    NASA Technical Reports Server (NTRS)

    Pollara-Bozzola, Fabrizio (Inventor)

    1989-01-01

    A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.

  3. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They constitute a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos, and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data, and for the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, unlike the LRs, which are shared.

  4. A dynamic growth model of Dunaliella salina: parameter identification and profile likelihood analysis.

    PubMed

    Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2014-12-01

    In this work, a photoautotrophic growth model incorporating light and nutrient effects on the growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from the literature and modified according to the experimental setup, with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. In conclusion, our results suggest that the proposed model equations yield a predictive growth model for D. salina.
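
    The profile likelihood computation referred to above follows a generic recipe; a minimal sketch in Python (not the authors' code, with hypothetical function and parameter names) is shown below.

```python
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(neg_loglik, theta_hat, index, grid):
    """Profile likelihood for one parameter of a fitted model.

    `neg_loglik(theta)` is the negative log-likelihood, `theta_hat` the full
    MLE, `index` the parameter to profile, and `grid` the values to scan.
    For each fixed value, all remaining parameters are re-optimized; a flat
    profile signals a practically non-identifiable parameter."""
    free = [i for i in range(len(theta_hat)) if i != index]
    profile = []

    for value in grid:
        def restricted(free_params):
            theta = np.asarray(theta_hat, dtype=float).copy()
            theta[index] = value
            theta[free] = free_params
            return neg_loglik(theta)

        res = minimize(restricted, np.asarray(theta_hat, dtype=float)[free],
                       method="Nelder-Mead")
        profile.append(res.fun)

    return np.array(profile)
```

    In practice the profile is compared against its minimum plus a chi-squared threshold to obtain likelihood-based confidence intervals; a parameter whose profile never crosses the threshold in one or both directions is flagged as practically non-identifiable.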

  5. FPGA acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods

    PubMed Central

    2010-01-01

    Background: Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. Results: We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10× speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Conclusions: Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference, as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs) [1]. PMID:20385005
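
    In its standard pruning form, the PLF kernel is a small dense computation per alignment site; the sketch below (Python, not the MrBayes or FPGA code) shows why the per-site loop has no cross-iteration dependencies and is therefore easy to pipeline or parallelize.

```python
import numpy as np

def plf_node(cond_left, cond_right, P_left, P_right):
    """Core of the phylogenetic likelihood function for one internal node.

    Sketch of the standard pruning-style kernel: `cond_left` and `cond_right`
    are (n_sites, 4) arrays of conditional likelihoods at the two child nodes,
    and `P_left`, `P_right` are the 4x4 substitution-probability matrices on
    the child branches.  Sites are independent of one another."""
    # For every site s and parent state i:
    #   L_parent[s, i] = (sum_j P_left[i, j]  * cond_left[s, j])
    #                  * (sum_k P_right[i, k] * cond_right[s, k])
    return (cond_left @ P_left.T) * (cond_right @ P_right.T)

# Example with random data for a 4-state (DNA) model over 1000 sites.
rng = np.random.default_rng(0)
left, right = rng.random((1000, 4)), rng.random((1000, 4))
P = np.full((4, 4), 0.05) + np.eye(4) * 0.8   # toy transition matrix, rows sum to 1
parent = plf_node(left, right, P, P)
```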

  6. Estimation of shape β parameter in Kumaraswamy distribution using Maximum Likelihood and Bayes method

    NASA Astrophysics Data System (ADS)

    Simbolon, H. G.; Fithriani, I.; Nurrohmah, S.

    2017-07-01

    This paper discusses the Maximum Likelihood (ML) and Bayes methods for estimating the shape parameter β of the Kumaraswamy distribution. The two methods are compared according to the Mean Square Error (MSE) obtained from each estimator. In the Bayes method, two loss functions are used, namely the Squared Error Loss Function (SELF) and the Precautionary Loss Function (PLF), and the posterior risks obtained from the two loss functions are compared. The comparison is applied to hydrological data as a recommendation for the best method of representing the data. The hydrological data used in this study are water storage records for the Shasta Reservoir, obtained from the California Data Exchange Center. Using Mathematica and the formulas from both methods, one obtains estimates that describe the data well and can also be used to predict the next observation of the reservoir at a given time.
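
    For the ML part, the estimator of β has a closed form when the other shape parameter is treated as known; the sketch below also adds a Bayes estimate under squared-error loss with a hypothetical conjugate Gamma prior, which is an assumption made here and not necessarily the prior used in the paper.

```python
import numpy as np

def kumaraswamy_beta_mle(x, alpha):
    """Closed-form MLE of beta for a Kumaraswamy(alpha, beta) sample on (0, 1),
    with alpha treated as known.  Derivation assumed here: the density is
    f(x) = alpha*beta*x**(alpha-1)*(1-x**alpha)**(beta-1), and setting the
    score in beta to zero gives beta_hat = -n / sum(log(1 - x**alpha))."""
    x = np.asarray(x, dtype=float)
    t = np.log(1.0 - x ** alpha).sum()
    return -x.size / t

def kumaraswamy_beta_bayes_self(x, alpha, a0=1.0, b0=1.0):
    """Bayes estimate under squared-error loss (posterior mean) with a
    hypothetical Gamma(a0, b0) prior: the likelihood in beta is proportional
    to beta**n * exp(beta*T) with T = sum(log(1 - x**alpha)), so the posterior
    is Gamma(a0 + n, b0 - T)."""
    x = np.asarray(x, dtype=float)
    t = np.log(1.0 - x ** alpha).sum()
    return (a0 + x.size) / (b0 - t)

# Toy usage with simulated data (illustrative values only, not from the paper).
rng = np.random.default_rng(1)
u = rng.random(500)
alpha_true, beta_true = 2.0, 3.0
sample = (1.0 - (1.0 - u) ** (1.0 / beta_true)) ** (1.0 / alpha_true)  # inverse CDF
print(kumaraswamy_beta_mle(sample, alpha_true),
      kumaraswamy_beta_bayes_self(sample, alpha_true))
```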

  7. Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models

    PubMed Central

    Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark

    2015-01-01

    This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the

  8. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation of non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  9. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  10. Multi-component spectral analysis of extended sources with a likelihood method

    NASA Astrophysics Data System (ADS)

    Naumann, Christopher Lindsay; Jacholkowska, Agnieszka

    2012-12-01

    The spectral and morphological analysis of gamma-ray sources with multiple emission components remains a major challenge for Cherenkov telescopes due to background emission from diffuse gamma rays. Current methods of background suppression, based on the bin-by-bin subtraction of OFF-source data, do not allow an analysis of the various background components. As an alternative, we present an approach based on an event-by-event likelihood fit of ON-source data, using a combined spectral model for the source emission as well as for the gamma-like background obtained from fits of the OFF-source data. Multiple emission components are separated by successive fits in different energy regimes, and the spectral variation inside the extended source is derived. The performance of this approach is evaluated by toy Monte Carlo studies. For the application to real data, two well-studied H.E.S.S. sources are re-examined: the extragalactic point source PKS 2155-304 and the extended pulsar wind nebula HESS J1825-137. For the latter, the radial variation of the emission spectral index was evaluated with the likelihood method, confirming earlier findings by the H.E.S.S. collaboration [1].

  11. A semiparametric likelihood-based method for regression analysis of mixed panel-count data.

    PubMed

    Zhu, Liang; Zhang, Ying; Li, Yimei; Sun, Jianguo; Robison, Leslie L

    2017-09-15

    Panel-count data arise when each study subject is observed only at discrete time points in a recurrent event study, and only the numbers of the event of interest between observation time points are recorded (Sun and Zhao, 2013). Sometimes, however, the exact number of events between some observation times is unknown, and all that is known is whether the event of interest has occurred. In this article, we refer to this type of data as mixed panel-count data and propose a likelihood-based semiparametric regression method for their analysis under the nonhomogeneous Poisson process assumption. We establish the asymptotic properties of the resulting estimator by employing empirical process theory, without relying on the Poisson assumption. We also conduct an extensive simulation study, which suggests that the proposed method works well in practice. Finally, the method is applied to the Childhood Cancer Survivor Study that motivated this work. © 2017, The International Biometric Society.

  12. Equivalence between modularity optimization and maximum likelihood methods for community detection

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.

    2016-11-01

    We demonstrate an equivalence between two widely used methods of community detection in networks, the method of modularity maximization and the method of maximum likelihood applied to the degree-corrected stochastic block model. Specifically, we show an exact equivalence between maximization of the generalized modularity that includes a resolution parameter and the special case of the block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
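
    For reference, the generalized modularity in question can be written as below; the closed-form expression for the optimal resolution parameter is reproduced from memory and should be checked against the paper.

```latex
% Generalized modularity with resolution parameter \gamma, for adjacency matrix A,
% degrees k_i, edge count m, and group labels g_i:
\[
  Q(\gamma) \;=\; \frac{1}{2m} \sum_{ij}
     \left[ A_{ij} - \gamma \frac{k_i k_j}{2m} \right] \delta(g_i, g_j).
\]
% Per the abstract, maximizing Q(\gamma) over the labels is equivalent to
% maximum-likelihood fitting of the degree-corrected planted partition model
% with within/between-group rate parameters \omega_{in}, \omega_{out}, and the
% optimal resolution takes a closed form of the type
\[
  \gamma \;=\; \frac{\omega_{\mathrm{in}} - \omega_{\mathrm{out}}}
                    {\ln \omega_{\mathrm{in}} - \ln \omega_{\mathrm{out}}}.
\]
```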

  13. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d⁴) for the basis change plus O(d³) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d³) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
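
    A sketch of the eigenvalue-adjustment step described above (projection onto the nearest physical state), assuming the candidate matrix is Hermitian with unit trace; this is a reconstruction of the procedure, not the authors' code.

```python
import numpy as np

def nearest_physical_state(mu: np.ndarray) -> np.ndarray:
    """Project a Hermitian, unit-trace matrix onto the set of density matrices
    (positive semidefinite, trace one) under the 2-norm, by adjusting its
    eigenvalues while keeping its eigenvectors."""
    evals, evecs = np.linalg.eigh(mu)          # eigenvalues in ascending order
    lam = evals[::-1].copy()                   # work in descending order
    d = lam.size
    a = 0.0                                    # accumulated negative mass
    i = d
    # Zero out the smallest eigenvalues while they would remain negative.
    while i > 0 and lam[i - 1] + a / i < 0:
        a += lam[i - 1]
        lam[i - 1] = 0.0
        i -= 1
    # Spread the accumulated mass uniformly over the remaining eigenvalues.
    if i:
        lam[:i] += a / i
    lam_asc = lam[::-1]                        # back to ascending order
    return (evecs * lam_asc) @ evecs.conj().T

# Toy usage: a Hermitian matrix with trace one but one negative eigenvalue.
mu = np.diag([0.7, 0.4, -0.1])
rho = nearest_physical_state(mu)               # diag(0.65, 0.35, 0.0)
```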

  14. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  15. An efficient frequency recognition method based on likelihood ratio test for SSVEP-based BCI.

    PubMed

    Zhang, Yangsong; Dong, Li; Zhang, Rui; Yao, Dezhong; Zhang, Yu; Xu, Peng

    2014-01-01

    An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this, the likelihood ratio test (LRT) was, for the first time, utilized to develop a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to use the LRT to quantify the association between multichannel EEG signals and reference signals constructed according to the stimulus frequencies. For both simulated and real SSVEP data, the proposed method yielded higher recognition accuracy with shorter time-window lengths and was more robust against noise in comparison with the popular canonical correlation analysis (CCA)-based method and the least absolute shrinkage and selection operator (LASSO)-based method. The recognition accuracy and information transfer rate (ITR) obtained by the proposed method were higher than those of the CCA-based and LASSO-based methods. These results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-based BCIs.

  16. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  17. Comparative analysis of the performance of laser Doppler systems using maximum likelihood and phase increment methods

    NASA Astrophysics Data System (ADS)

    Sobolev, V. S.; Zhuravel', F. A.; Kashcheeva, G. A.

    2016-11-01

    This paper presents a comparative analysis of the errors of two alternative methods for estimating the central frequency of laser Doppler signals, one based on the maximum likelihood criterion and the other on the so-called pulse-pair (phase increment) technique. Using computer simulation, the standard deviations of the estimated Doppler frequency from its true value are determined for both methods, and plots of the ratios of these deviations are constructed as a measure of the accuracy gain of one method over the other. The results can be used by developers of such systems to choose an optimal signal processing algorithm based on a compromise between the accuracy and speed of the systems and the computational effort required.
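
    In a common sketch notation (assumed here, not the paper's), the two estimators being compared can be written as follows.

```latex
% Given complex Doppler samples z_n taken at interval T_s, the pulse-pair
% (phase-increment) estimate of the central frequency is
\[
  \hat{f}_{\mathrm{pp}} \;=\; \frac{1}{2\pi T_s}\,
     \arg \Big( \sum_{n} z_{n+1} \, z_n^{*} \Big),
\]
% while the maximum-likelihood estimate maximizes the likelihood of the samples
% under an assumed signal-plus-noise model,
\[
  \hat{f}_{\mathrm{ML}} \;=\; \arg\max_{f}\; \ln p\big(z_1,\dots,z_N \mid f\big).
\]
```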

  18. Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer

    PubMed Central

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander

    2014-01-01

    The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method. PMID:24173548
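
    The three quantities named in the title can be summarised in generic notation (assumed here, not quoted from the paper).

```latex
% Maximum likelihood, for data y_1, ..., y_n and model f(.; \theta):
\[
  \hat{\theta} = \arg\max_{\theta}\; \ell(\theta), \qquad
  \ell(\theta) = \sum_{i=1}^{n} \ln f(y_i;\theta).
\]
% Profile likelihood for a parameter of interest \beta with nuisance \gamma:
\[
  \ell_{p}(\beta) = \max_{\gamma}\; \ell(\beta,\gamma).
\]
% Penalized likelihood, adding a roughness or prior-like penalty r(\theta):
\[
  \ell_{r}(\theta) = \ell(\theta) - r(\theta),
  \qquad \text{e.g. } r(\theta) = \sum_j \frac{(\theta_j - m_j)^2}{2 s_j^2}.
\]
```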

  19. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2016-04-26

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayesian inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary for several objectives. First, concepts typical of validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions.

  20. Log-Likelihood Method of Reducing Noise in CRISM Along-Track Oversampled Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Kreisch, C.; Arvidson, R. E.; O'Sullivan, J. A.; Fraeman, A. A.

    2014-12-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) began taking Along-Track Oversampled (ATO) observations in 2010 to obtain super-resolution hyperspectral images, with along-track spatial resolutions as small as ~5 m/pixel. We present a new method to both process the ATOs and reduce their noise. We developed a log-likelihood algorithm to determine the most likely estimate of the actual Martian surface given the CRISM spectral radiance measurements and the spatial and spectral transfer functions (TFs). We assume the spatial TF is given by a 2D Gaussian and use the asymmetric Gaussian spectral TF given in the CRISM documentation. The log-likelihood method reduces Poisson noise in the spectrum of a given hyperspectral pixel, allowing identification of subtle spectral absorptions otherwise lost in noise. We assume that the data are Poisson distributed and that the mean μ is given by a blurred version of the actual scene c; the CRISM ATO is this blurred version. It is more efficient to maximize the log of a function than the function itself, so we compute the image c that maximizes the log-likelihood function for the data. We begin with an initial guess for the projected, estimated scene c and have the freedom to choose any reasonable output pixel size. To forward project, we convolve the spatial TF with c and then convolve the spectral TF with the output. The result is a simulation of the measured ATO. We divide the CRISM ATO by this simulated scene to obtain the error, and then we backproject the error into the dimensions of c by applying the transposes of the TFs. Finally, we update our guess for the actual scene by multiplying c by the backprojected error. We iterate until convergence. We focus our analysis on recent ATOs of Gale Crater to illustrate the advantages of the method. We will also present a combined Poisson and Gaussian noise iterative approach with regularization, and results for reducing noise in CRISM data.
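
    The multiplicative update loop described above has the familiar Richardson-Lucy / MLEM form; the sketch below is a simplified stand-in in which a single isotropic Gaussian blur replaces the separate spatial and spectral transfer functions, so it illustrates the structure of the iteration rather than the authors' actual processing chain.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem_deblur(data, sigma, n_iter=50, eps=1e-12):
    """Multiplicative maximum-likelihood (Richardson-Lucy style) update for
    Poisson-distributed data blurred by a Gaussian point-spread function.
    Because the Gaussian kernel is symmetric and normalized, convolution with
    it serves as both the forward projection and its transpose."""
    c = np.full_like(data, data.mean(), dtype=float)   # initial guess for the scene
    for _ in range(n_iter):
        simulated = gaussian_filter(c, sigma) + eps    # forward projection
        error = data / simulated                       # measured / simulated
        c *= gaussian_filter(error, sigma)             # backproject and update
    return c
```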

  1. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type I error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies and quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher-potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
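
    A minimal sketch of the pooling step, under a normal approximation to each study's likelihood; the function, the support threshold k, and the numerical inputs are illustrative placeholders, not values from the paper.

```python
import numpy as np

def combined_loglr(theta_grid, estimates, std_errors):
    """Sum per-study log-likelihood-ratio curves on a grid of effect sizes.

    Normal approximation assumed: each study's log-likelihood in theta is
    taken as -(theta - theta_hat)**2 / (2*se**2), and its log-LR against the
    null theta0 = 0 is the difference between that and the value at the null."""
    total = np.zeros_like(theta_grid)
    for est, se in zip(estimates, std_errors):
        loglik = -(theta_grid - est) ** 2 / (2 * se ** 2)
        loglik_null = -(0.0 - est) ** 2 / (2 * se ** 2)
        total += loglik - loglik_null
    return total

# Pooled point estimate and a support ("intrinsic") interval: all theta whose
# combined LR is within a factor k of the maximum (k is a hypothetical choice).
theta = np.linspace(-1, 1, 2001)
curve = combined_loglr(theta, estimates=[0.12, 0.25, 0.05], std_errors=[0.10, 0.15, 0.08])
k = 8.0
pooled = theta[np.argmax(curve)]
support = theta[curve >= curve.max() - np.log(k)]
interval = (support.min(), support.max())
```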

  2. A Maximum Likelihood Method for Reconstruction of the Evolution of Eukaryotic Gene Structure

    PubMed Central

    Carmel, Liran; Rogozin, Igor B.; Wolf, Yuri I.; Koonin, Eugene V.

    2012-01-01

    Spliceosomal introns are one of the principal distinctive features of eukaryotes. Nevertheless, different large-scale studies disagree about even the most basic features of their evolution. In order to come up with a more reliable reconstruction of intron evolution, we developed a model that is far more comprehensive than previous ones. This model is rich in parameters, and estimating them accurately is infeasible by straightforward likelihood maximization. Thus, we have developed an expectation-maximization algorithm that allows for efficient maximization. Here, we outline the model and describe the expectation-maximization algorithm in detail. Since the method works with intron presence–absence maps, it is expected to be instrumental for the analysis of the evolution of other binary characters as well. PMID:19381540

  3. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case analysis may result in bias. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when both the regression model and the propensity score model are correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.

  4. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    NASA Astrophysics Data System (ADS)

    Song, N.; He, B.; Wahl, R. L.; Frey, E. C.

    2011-09-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  5. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections.

    PubMed

    Song, N; He, B; Wahl, R L; Frey, E C

    2011-09-07

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  6. Analyzing pathogen suppressiveness in bioassays with natural soils using integrative maximum likelihood methods in R.

    PubMed

    Rall, Björn C; Latz, Ellen

    2016-01-01

    The potential of soils to naturally suppress inherent plant pathogens is an important ecosystem function. Usually, pathogen infection assays are used for estimating the suppressive potential of soils. In natural soils, however, co-occurring pathogens might simultaneously infect plants, complicating the estimation of a focal pathogen's infection rate (the initial slope of the infection curve) as a measure of soil suppressiveness. Here, we present a method in R that corrects for these unwanted effects by developing a two-pathogen mono-molecular infection model. We fit the two-pathogen mono-molecular infection model to data using an integrative approach that combines a numerical simulation of the model with an iterative maximum likelihood fit. We show that, in the presence of co-occurring pathogens, using uncorrected data leads to a critical under- or overestimation of soil suppressiveness measures. In contrast, our new approach enables precise estimation of soil suppressiveness measures such as the plant infection rate and plant resistance time. Our method allows a correction of measured infection parameters that is necessary when different pathogens are present. Moreover, our model can be (1) adapted to use other models such as the logistic or the Gompertz model; and (2) extended by a facilitation parameter if infections in plants increase the susceptibility to new infections. We propose our method to be particularly useful for exploring the soil suppressiveness of natural soils from different sites (e.g., in biodiversity experiments).

  7. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    PubMed Central

    Benevides, Leandro de Jesus; de Carvalho, Daniel Santana; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-01-01

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how the various apo E proteins are related across groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study of apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two distinct groups: one with mammals only (C1) and another with fish (C2), plus a single node with the single sequence available for an amphibian. The agreement between the results from the different methods shows that the complex network approach is effective for phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27560837

  8. Analyzing pathogen suppressiveness in bioassays with natural soils using integrative maximum likelihood methods in R

    PubMed Central

    Latz, Ellen

    2016-01-01

    The potential of soils to naturally suppress inherent plant pathogens is an important ecosystem function. Usually, pathogen infection assays are used for estimating the suppressive potential of soils. In natural soils, however, co-occurring pathogens might simultaneously infect plants, complicating the estimation of a focal pathogen's infection rate (the initial slope of the infection curve) as a measure of soil suppressiveness. Here, we present a method in R that corrects for these unwanted effects by developing a two-pathogen mono-molecular infection model. We fit the two-pathogen mono-molecular infection model to data using an integrative approach that combines a numerical simulation of the model with an iterative maximum likelihood fit. We show that, in the presence of co-occurring pathogens, using uncorrected data leads to a critical under- or overestimation of soil suppressiveness measures. In contrast, our new approach enables precise estimation of soil suppressiveness measures such as the plant infection rate and plant resistance time. Our method allows a correction of measured infection parameters that is necessary when different pathogens are present. Moreover, our model can be (1) adapted to use other models such as the logistic or the Gompertz model; and (2) extended by a facilitation parameter if infections in plants increase the susceptibility to new infections. We propose our method to be particularly useful for exploring the soil suppressiveness of natural soils from different sites (e.g., in biodiversity experiments). PMID:27833800

  9. Two-locus models of disease: Comparison of likelihood and nonparametric linkage methods

    SciTech Connect

    Goldin, L.R.; Weeks, D.E.

    1993-10-01

    The power to detect linkage for likelihood and nonparametric (Haseman-Elston, affected-sib-pair, and affected-pedigree-member) methods is compared for the case of a common, dichotomous trait resulting from the segregation of two loci. Pedigree data for several two-locus epistatic and heterogeneity models have been simulated, with one of the loci linked to a marker locus. Replicate samples of 20 three-generation pedigrees (16 individuals/pedigree) were simulated and then ascertained for having at least 6 affected individuals. The power of linkage detection calculated under the correct two-locus model is only slightly higher than that under a single locus model with reduced penetrance. As expected, the nonparametric linkage methods have somewhat lower power than does the lod-score method, the difference depending on the mode of transmission of the linked locus. Thus, for many pedigree linkage studies, the lod-score method will have the best power. However, this conclusion depends on how many times the lod score will be calculated for a given marker. The Haseman-Elston method would likely be preferable to calculating lod scores under a large number of genetic models (i.e., varying both the mode of transmission and the penetrances), since such an analysis requires an increase in the critical value of the lod criterion. The power of the affected-pedigree-member method is lower than the other methods, which can be shown to be largely due to the fact that marker genotypes for unaffected individuals are not used. 31 refs., 1 fig., 5 tabs.

  10. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
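
    A sketch of the weighted statistic that this family of designs builds on (the standard two-stage form; the paper's exact construction may differ).

```latex
% With stage-wise z-statistics Z_1 and Z_2 and prespecified weights based on the
% originally planned information fractions,
\[
  Z_w \;=\; w_1 Z_1 + w_2 Z_2, \qquad w_1^2 + w_2^2 = 1,
\]
% Z_w remains standard normal under the null even if the stage-2 sample size is
% changed at the interim look, which is how the weighted method preserves the
% type-I error rate.  The dual test then rejects only if both Z_w and the
% unweighted (likelihood-ratio) statistic exceed the unadjusted critical value.
```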

  11. Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods

    NASA Astrophysics Data System (ADS)

    Marchant, Ben; Mackay, Jonathan; Bloomfield, John

    2016-09-01

    Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
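    As a rough illustration of the residual-model comparison described above (not the authors' implementation), the sketch below fits an independent Gaussian model and a temporally autocorrelated Gaussian model to synthetic residuals observed at irregular times and compares them by AIC; the exponential covariance form and all numbers are assumptions.

```python
# Minimal sketch: compare an independent-Gaussian residual model with one that
# adds exponential temporal autocorrelation, using AIC, for irregularly spaced data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 120, size=80))                 # months, irregular spacing
cov_true = 0.4 * np.exp(-np.abs(t[:, None] - t[None, :]) / 6.0)
resid = rng.multivariate_normal(np.zeros_like(t), cov_true)

def nll_iid(params):
    sigma2 = np.exp(params[0])
    n = len(resid)
    return 0.5 * (n * np.log(2 * np.pi * sigma2) + resid @ resid / sigma2)

def nll_expcov(params):
    sigma2, rho = np.exp(params)                          # variance and range (months)
    cov = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / rho)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (len(resid) * np.log(2 * np.pi) + logdet
                  + resid @ np.linalg.solve(cov, resid))

fit0 = minimize(nll_iid, x0=[0.0], method="Nelder-Mead")
fit1 = minimize(nll_expcov, x0=[0.0, 1.0], method="Nelder-Mead")
aic0 = 2 * 1 + 2 * fit0.fun
aic1 = 2 * 2 + 2 * fit1.fun
print(f"AIC iid = {aic0:.1f}, AIC autocorrelated = {aic1:.1f}")
```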

  12. A maximum-likelihood multi-resolution weak lensing mass reconstruction method

    NASA Astrophysics Data System (ADS)

    Khiabanian, Hossein

    Gravitational lensing occurs when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of the massive clusters at a resolution which is not attainable in the rest of the observed field.

  13. Multi-component study of extended sources with a likelihood method

    NASA Astrophysics Data System (ADS)

    Naumann, C. L.; Jacholkowska, A.

    2013-09-01

    Spectral and morphological analysis of gamma-ray sources with multiple emission components remains a major challenge for Cherenkov telescopes due to background emission from diffuse gamma rays. Current methods of background suppression, usually based on the bin-by-bin subtraction of OFF-source data, do not allow an analysis of the various background components. As an alternative, we present an approach based on an event-by-event likelihood fit of ON-source data, using a combined spectral model for the source emission as well as for the gamma-like background obtained from fits of the OFF-source data. Multiple emission components are separated by successive fits in different energy regimes, and spectral variations inside the extended source are derived. The performance of this approach is evaluated with toy Monte-Carlo studies. For the application to real data, two well-studied H.E.S.S. sources are re-examined: the extragalactic point source PKS 2155-304 and the extended pulsar wind nebula HESS J1825-137. For the latter, the radial variation of the emission spectral index was evaluated, confirming earlier findings by the H.E.S.S. collaboration.
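    As a simplified, hedged illustration of an event-by-event likelihood fit of this kind (a toy, not the H.E.S.S. analysis chain), the sketch below fits the fraction and index of a power-law source component on top of a background component whose spectral shape is taken as fixed, as if obtained from OFF-source fits. The energy range, spectral indices and event counts are assumptions.

```python
# Hedged sketch of an unbinned (event-by-event) likelihood fit: ON-source event
# energies are modelled as a mixture of a power-law source spectrum and a
# gamma-like background whose shape is assumed fixed from OFF-source fits.
import numpy as np
from scipy.optimize import minimize

e_min, e_max = 0.3, 30.0                       # TeV, fit range (assumed)
background_index = 2.7                         # fixed from OFF-source data (assumed)

def powerlaw_pdf(e, index):
    norm = (index - 1) / (e_min**(1 - index) - e_max**(1 - index))
    return norm * e**(-index)

def neg_log_likelihood(params, energies):
    src_index = params[0]
    f_src = 1.0 / (1.0 + np.exp(-params[1]))   # source fraction mapped into (0, 1)
    pdf = (f_src * powerlaw_pdf(energies, src_index)
           + (1 - f_src) * powerlaw_pdf(energies, background_index))
    return -np.log(pdf).sum()

rng = np.random.default_rng(3)
def sample_powerlaw(n, index):
    """Inverse-CDF sampling of a bounded power law on [e_min, e_max]."""
    u = rng.uniform(size=n)
    return (e_min**(1 - index)
            + u * (e_max**(1 - index) - e_min**(1 - index)))**(1 / (1 - index))

energies = np.concatenate([sample_powerlaw(400, 2.2),    # toy "source" events
                           sample_powerlaw(600, 2.7)])   # toy "background" events
fit = minimize(neg_log_likelihood, x0=[2.0, 0.0], args=(energies,),
               method="Nelder-Mead")
print(f"fitted source index = {fit.x[0]:.2f}")
```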

  14. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    PubMed Central

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  15. A Composite-Likelihood Method for Detecting Incomplete Selective Sweep from Population Genomic Data.

    PubMed

    Vy, Ha My T; Kim, Yuseob

    2015-06-01

    Adaptive evolution occurs as beneficial mutations arise and then increase in frequency by positive natural selection. How, when, and where in the genome such evolutionary events occur is a fundamental question in evolutionary biology. It is possible to detect ongoing positive selection or an incomplete selective sweep in species with sexual reproduction because, when a beneficial mutation is on the way to fixation, homologous chromosomes in the population are divided into two groups: one carrying the beneficial allele with very low polymorphism at nearby linked loci and the other carrying the ancestral allele with a normal pattern of sequence variation. Previous studies developed long-range haplotype tests to capture this difference between two groups as the signal of an incomplete selective sweep. In this study, we propose a composite-likelihood-ratio (CLR) test for detecting incomplete selective sweeps based on the joint sampling probabilities for allele frequencies of two groups as a function of strength of selection and recombination rate. Tested against simulated data, this method yielded statistical power and accuracy in parameter estimation that are higher than the iHS test and comparable to the more recently developed nSL test. This procedure was also applied to African Drosophila melanogaster population genomic data to detect candidate genes under ongoing positive selection. Upon visual inspection of sequence polymorphism, candidates detected by our CLR method exhibited clear haplotype structures predicted under incomplete selective sweeps. Our results suggest that different methods capture different aspects of genetic information regarding incomplete sweeps and thus are partially complementary to each other.

  16. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  17. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    PubMed

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  18. From Dynamical Processes to Likelihood Functions, An Application to Internet Surveillance Data for Influenza Like Illnesses

    NASA Astrophysics Data System (ADS)

    Stollenwerk, Nico

    2009-09-01

    Basic stochastic processes, like the SIS and SIR epidemics, are used to describe data from an internet-based surveillance system, the InfluenzaNet. Via generating functions, analytic expressions for the probability can be derived in some simplified situations. From these, likelihood functions for parameter estimation are constructed. This is a nice application in which partial differential equations appear in an epidemiological setting without invoking any explicitly spatial aspect. Where analytical difficulties arise, all steps can eventually be bridged by numerical simulations [1, 2].

  19. Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method

    NASA Astrophysics Data System (ADS)

    Bayrak, Özlem Türker; Akkaya, Aysen D.

    2010-02-01

    We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of sample observations and therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We also show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous compared with the least squares estimators. We give a real-life example.

  20. Calibrating floor field cellular automaton models for pedestrian dynamics by using likelihood function optimization

    NASA Astrophysics Data System (ADS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Nilsson, Daniel

    2015-11-01

    The formulation of pedestrian floor field cellular automaton models is generally based on hypothetical assumptions to represent reality. This paper proposes a novel methodology to calibrate these models using experimental trajectories. The methodology is based on likelihood function optimization and allows verifying whether the parameters defining a model statistically affect pedestrian navigation. Moreover, it allows comparing different model specifications or the parameters of the same model estimated using different data collection techniques, e.g. virtual reality experiment, real data, etc. The methodology is here implemented using navigation data collected in a Virtual Reality tunnel evacuation experiment including 96 participants. A trajectory dataset in the proximity of an emergency exit is used to test and compare different metrics, i.e. Euclidean and modified Euclidean distance, for the static floor field. In the present case study, modified Euclidean metrics provide better fitting with the data. A new formulation using random parameters for pedestrian cellular automaton models is also defined and tested.
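    A minimal sketch of the calibration idea follows (not the authors' implementation): each observed step is treated as a discrete choice among neighbouring cells with logit probabilities driven by the static floor field, and the sensitivity parameter is estimated by maximizing the log-likelihood. The grid, exit location, toy trajectory and the name k_S are assumptions for this sketch.

```python
# Hedged sketch: calibrate the static-floor-field sensitivity of a cellular
# automaton by maximising the likelihood of observed pedestrian steps, each
# modelled as a logit choice over the four von Neumann neighbours.
import numpy as np
from scipy.optimize import minimize_scalar

exit_cell = np.array([0, 5])                      # hypothetical exit location

def static_field(cell, metric="euclidean"):
    d = np.asarray(cell, dtype=float) - exit_cell
    return np.linalg.norm(d) if metric == "euclidean" else np.abs(d).sum()

def step_log_likelihood(k_s, current, chosen, metric="euclidean"):
    """Log-probability of the observed step among the 4 von Neumann neighbours."""
    neighbours = [current + np.array(d) for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    scores = np.array([-k_s * static_field(n, metric) for n in neighbours])
    scores -= scores.max()                        # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    idx = [tuple(n) for n in neighbours].index(tuple(chosen))
    return log_probs[idx]

# toy (current cell, next cell) observations; the last step moves away from the exit
steps = [((5, 0), (4, 0)), ((4, 0), (4, 1)), ((4, 1), (3, 1)), ((2, 2), (2, 1))]

def total_nll(k_s):
    return -sum(step_log_likelihood(k_s, np.array(c), np.array(n)) for c, n in steps)

fit = minimize_scalar(total_nll, bounds=(0.0, 10.0), method="bounded")
print(f"estimated static-field sensitivity k_S = {fit.x:.2f}")
```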

  1. HIV AND POPULATION DYNAMICS: A GENERAL MODEL AND MAXIMUM-LIKELIHOOD STANDARDS FOR EAST AFRICA*

    PubMed Central

    HEUVELINE, PATRICK

    2014-01-01

    In high-prevalence populations, the HIV epidemic undermines the validity of past empirical models and related demographic techniques. A parsimonious model of HIV and population dynamics is presented here and fit to 46,000 observations, gathered from 11 East African populations. The fitted model simulates HIV and population dynamics with standard demographic inputs and only two additional parameters for the onset and scale of the epidemic. The underestimation of the general prevalence of HIV in samples of pregnant women and the fertility impact of HIV are examples of the dynamic interactions that demographic models must reproduce and are shown here to increase over time even with constant prevalence levels. As a result, the impact of HIV on population growth appears to have been underestimated by current population projections that ignore this dynamic. PMID:12846130

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  4. A novel method based on maximum likelihood estimation for the construction of seismic fragility curves using numerical simulations

    NASA Astrophysics Data System (ADS)

    Dang, Cong-Thuat; Le, Thien-Phu; Ray, Pascal

    2017-10-01

    Seismic fragility curves, which give the probability of failure or of exceeding a damage state as a function of seismic intensity, can be established by engineering judgment, empirical approaches or numerical approaches. This paper focuses on the last of these. In recent studies, three popular methods based on numerical simulations, namely scaled seismic intensity, maximum likelihood estimation and probabilistic seismic demand/capacity models, have been studied and compared. The results show that the maximum likelihood estimation (MLE) method is in general better than the others. However, previous publications also indicated that the MLE method depends on the ground excitation input. The objective of this paper is therefore to propose a novel method that improves the existing MLE approach. The improvements are based on probabilistic ground motion information, which is taken into account in the proposed procedure. The validity of the new approach is verified by analytical tests and numerical examples.
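    For context, the baseline MLE approach that the paper sets out to improve can be sketched as follows (a hedged toy, not the paper's code): a lognormal fragility curve is fitted to binary failure outcomes from simulations by maximizing a Bernoulli likelihood. The intensity measure, sample size and true parameters are assumptions.

```python
# Hedged sketch of the standard MLE fragility fit: P(failure | IM) is modelled
# as Phi((ln IM - ln theta) / beta) and fitted to binary failure outcomes.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, im, failed):
    ln_theta, ln_beta = params
    p = norm.cdf((np.log(im) - ln_theta) / np.exp(ln_beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(failed * np.log(p) + (1 - failed) * np.log(1 - p)).sum()

# toy simulation results: intensity measure (e.g. PGA in g) and failure flag
rng = np.random.default_rng(7)
im = rng.uniform(0.05, 1.5, size=200)
failed = (rng.uniform(size=200) < norm.cdf((np.log(im) - np.log(0.6)) / 0.4)).astype(float)

fit = minimize(neg_log_likelihood, x0=[np.log(0.5), np.log(0.5)],
               args=(im, failed), method="Nelder-Mead")
theta_hat, beta_hat = np.exp(fit.x)
print(f"median capacity = {theta_hat:.2f} g, dispersion beta = {beta_hat:.2f}")
```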

  5. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.

  6. A likelihood method to cross-calibrate air-shower detectors

    NASA Astrophysics Data System (ADS)

    Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years, both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.

  7. How to use dynamic light scattering to improve the likelihood of growing macromolecular crystals.

    PubMed

    Borgstahl, Gloria E O

    2007-01-01

    Dynamic light scattering (DLS) has become one of the most useful diagnostic tools for crystallization. The main purpose of using DLS in crystal screening is to help the investigator understand the size distribution, stability, and aggregation state of macromolecules in solution. It can also be used to understand how experimental variables influence aggregation. With commercially available instruments, DLS is easy to perform, and most of the sample is recoverable. Most usefully, the homogeneity or monodispersity of a sample, as measured by DLS, can be predictive of crystallizability.

  8. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  9. Maximum-likelihood estimation of familial correlations from multivariate quantitative data on pedigrees: a general method and examples.

    PubMed Central

    Rao, D C; Vogler, G P; McGue, M; Russell, J M

    1987-01-01

    A general method for maximum-likelihood estimation of familial correlations from pedigree data is presented. The method is applicable to any type of data structure, including pedigrees in which variable numbers of individuals are present within classes of relatives, data in which multiple phenotypic measures are obtained on each individual, and multiple group analyses in which some correlations are equated across groups. The method is applied to data on high-density lipoprotein cholesterol and total cholesterol levels obtained from participants in the Swedish Twin Family Study. Results indicate that there is strong familial resemblance for both traits but little cross-trait resemblance. PMID:3687943

  10. A method for selecting M dwarfs with an increased likelihood of unresolved ultracool companionship

    NASA Astrophysics Data System (ADS)

    Cook, N. J.; Pinfield, D. J.; Marocco, F.; Burningham, B.; Jones, H. R. A.; Frith, J.; Zhong, J.; Luo, A. L.; Qi, Z. X.; Lucas, P. W.; Gromadzki, M.; Day-Jones, A. C.; Kurtev, R. G.; Guo, Y. X.; Wang, Y. F.; Bai, Y.; Yi, Z. P.; Smart, R. L.

    2016-04-01

    Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. We present an optimized method for identifying M dwarfs which may have unresolved ultracool companions. We construct a catalogue of 440 694 M dwarf candidates, from Wide-Field Infrared Survey Explorer, Two Micron All-Sky Survey and Sloan Digital Sky Survey, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints we isolate a subsample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). We present 1082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, we estimate that the occurrence of unresolved ultracool companions amongst our M dwarf + ultracool dwarf candidates should be at least four times the average for our full M dwarf catalogue. We discuss possible contamination and bias and predict yields of candidates based on our simulations.

  11. DREAM3: Network Inference Using Dynamic Context Likelihood of Relatedness and the Inferelator

    DTIC Science & Technology

    2010-03-22

  12. A maximum likelihood method for high resolution proton radiography/proton CT.

    PubMed

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2016-12-07

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thickness (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm(-1) to 4.53 lp cm(-1) in the 200 MeV beam and from 3.49 lp cm(-1) to 5.76 lp cm(-1) in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution as investigated between a radiography acquired with the parallel (3.49 lp cm(-1) to 5.76 lp cm(-1)) or conical beam (from 3.49 lp cm(-1) to 5.56 lp cm(-1)). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm(-1) for the parallel beam and from 3.03 to 5.15 lp cm(-1) for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.

  13. A maximum likelihood method for high resolution proton radiography/proton CT

    NASA Astrophysics Data System (ADS)

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao

    2016-12-01

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography’s spatial resolution. The water equivalent thickness (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution as investigated between a radiography acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) or conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.

  14. Plate dynamical mechanisms as constraints on the likelihood of earthquake precursors in the ionosphere

    NASA Astrophysics Data System (ADS)

    Osmaston, Miles

    2013-04-01

    In my oral(?) contribution to this session [1] I use my studies of the fundamental physics of gravitation to derive a reason for expecting the vertical gradient of electron density (= radial electric field) in the ionosphere to be closely affected by another field, directly associated with the ordinary gravitational potential (g) present at the Earth's surface. I have called that other field the Gravity-Electric (G-E) field. A calibration of this linkage relationship could be provided by noting corresponding co-seismic changes in (g) and in the ionosphere when, for example, a major normal-fault slippage occurs. But we are here concerned with precursory changes. This means we are looking for mechanisms which, on suitably short timescales, would generate pre-quake elastic deformation that changes the local (g). This poster supplements my talk by noting, for more relaxed discussion, what I see as potentially relevant plate dynamical mechanisms. Timescale constraints. If monitoring for ionospheric precursors is on only short timescales, their detectability is limited to correspondingly tectonically active regions. But as our monitoring becomes more precise and over longer terms, this constraint will relax. Most areas of the Earth are undergoing very slow heating or cooling and corresponding volume or epeirogenic change; major earthquakes can result but we won't have detected any accumulating ionospheric precursor. Transcurrent faulting. In principle, slip on a straight fault, even in a stick-slip manner, should produce little vertical deformation, but a kink, such as has caused the Transverse Ranges on the San Andreas Fault, would seem worth monitoring for precursory build-up in the ionosphere. Plate closure - subducting plate downbend. The traditionally presumed elastic flexure downbend mechanism is incorrect. 'Seismic coupling' has long been recognized by seismologists, invoking the repeated occurrence of 'asperities' to temporarily lock subduction and allow stress

  15. Power Analysis for the Likelihood-Ratio Test in Latent Markov Models: Shortcutting the Bootstrap p-Value-Based Method.

    PubMed

    Gudicha, Dereje W; Schmittmann, Verena D; Tekle, Fetene B; Vermunt, Jeroen K

    2016-01-01

    The latent Markov (LM) model is a popular method for identifying distinct unobserved states and transitions between these states over time in longitudinally observed responses. The bootstrap likelihood-ratio (BLR) test yields the most rigorous test for determining the number of latent states, yet little is known about power analysis for this test. Power could be computed as the proportion of the bootstrap p values (PBP) for which the null hypothesis is rejected. This requires performing the full bootstrap procedure for a large number of samples generated from the model under the alternative hypothesis, which is computationally infeasible in most situations. This article presents a computationally feasible shortcut method for power computation for the BLR test. The shortcut method involves the following simple steps: (1) obtaining the parameters of the model under the null hypothesis, (2) constructing the empirical distributions of the likelihood ratio under the null and alternative hypotheses via Monte Carlo simulations, and (3) using these empirical distributions to compute the power. We evaluate the performance of the shortcut method by comparing it to the PBP method and, moreover, show how the shortcut method can be used for sample-size determination.
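    A heavily simplified, hedged illustration of the shortcut idea (steps 2 and 3 above) follows, using a one- versus two-parameter Gaussian test as a stand-in for the latent Markov state comparison: empirical likelihood-ratio distributions are built under the null and the alternative by Monte Carlo, and power is the proportion of alternative-hypothesis statistics exceeding the null critical value. The model, effect size and sample sizes are assumptions.

```python
# Hedged sketch of the Monte Carlo shortcut for power computation: simulate the
# likelihood-ratio statistic under the null and the alternative, take the null
# 95th percentile as the critical value, and read power off the alternative draws.
import numpy as np

rng = np.random.default_rng(11)
n, n_sims = 200, 2000
effect = 0.2                                   # assumed alternative mean (in SD units)

def lr_statistic(x):
    """-2 log LR for H0: mean = 0 vs H1: mean free, in a Gaussian model with unknown variance."""
    s2_h1 = np.var(x)                          # variance MLE under H1
    s2_h0 = np.mean(x**2)                      # variance MLE under H0 (mean fixed at 0)
    return len(x) * (np.log(s2_h0) - np.log(s2_h1))

lr_null = np.array([lr_statistic(rng.normal(0.0, 1.0, n)) for _ in range(n_sims)])
lr_alt = np.array([lr_statistic(rng.normal(effect, 1.0, n)) for _ in range(n_sims)])

crit = np.quantile(lr_null, 0.95)              # empirical 5% critical value
power = np.mean(lr_alt > crit)
print(f"critical value = {crit:.2f}, estimated power = {power:.2f}")
```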

  16. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  17. Weighted Hurdle Regression Method for Joint Modeling of Cardiovascular Events Likelihood and Rate in the U.S. Dialysis Population

    PubMed Central

    Şentürk, Damla; Dalrymple, Lorien S.; Mu, Yi; Nguyen, Danh V.

    2014-01-01

    We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (a) the probability of cardiovascular events, a binary process, and (b) the rate of events once the realization is positive - when the ‘hurdle’ is crossed - using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, then the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, where standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting. PMID:24930810

  18. Weighted hurdle regression method for joint modeling of cardiovascular events likelihood and rate in the US dialysis population.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V

    2014-11-10

    We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive, when the 'hurdle' is crossed, using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, then the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, where standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting.
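    A hedged sketch of a hurdle likelihood in the spirit of the two records above (not the authors' code): a logistic part for whether any event occurs and a zero-truncated Poisson part for the count given at least one event, with per-subject weights standing in for the follow-up-time adjustment. The covariate structure, the weight form and the toy data are assumptions.

```python
# Hedged sketch of a weighted hurdle likelihood: logistic "any event" part plus a
# zero-truncated Poisson count part, maximised as a weighted log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, x, y, w):
    b0, b1, g0, g1 = params                           # hypothetical coefficient names
    p_pos = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))      # hurdle (any event) probability
    mu = np.exp(g0 + g1 * x)                          # rate given at least one event
    ll_bin = np.where(y > 0, np.log(p_pos), np.log1p(-p_pos))
    # zero-truncated Poisson log-pmf: y*log(mu) - mu - log(y!) - log(1 - exp(-mu))
    ll_ztp = y * np.log(mu) - mu - gammaln(y + 1) - np.log(-np.expm1(-mu))
    ll = ll_bin + np.where(y > 0, ll_ztp, 0.0)
    return -(w * ll).sum()

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
follow_up = rng.uniform(0.5, 3.0, size=n)             # years since start of dialysis (toy)
w = follow_up / follow_up.mean()                      # illustrative weights (assumed form)
y = rng.poisson(np.exp(-0.5 + 0.4 * x)) * rng.binomial(1, 0.5, size=n)

fit = minimize(neg_log_likelihood, x0=np.zeros(4), args=(x, y, w),
               method="Nelder-Mead")
print("estimated (b0, b1, g0, g1):", np.round(fit.x, 2))
```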

  19. Methods of applied dynamics

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.; Wilson, H. B.

    1991-01-01

    The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.

  20. A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.

    2014-12-01

    The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, our choice of minimizing the difference between instrument counts not phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.

  1. Calibrating CAT Pools and Online Pretest Items Using Marginal Maximum Likelihood Methods.

    ERIC Educational Resources Information Center

    Pommerich, Mary; Segall, Daniel O.

    Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…

  2. Extended likelihood ratio test-based methods for signal detection in a drug class with application to FDA's adverse event reporting system database.

    PubMed

    Zhao, Yueqin; Yi, Min; Tiwari, Ram C

    2016-05-02

    A likelihood ratio test, recently developed for the detection of signals of adverse events for a drug of interest in the FDA Adverse Events Reporting System database, is extended to detect signals of adverse events simultaneously for all the drugs in a drug class. The extended likelihood ratio test methods, based on the Poisson model (Ext-LRT) and the zero-inflated Poisson model (Ext-ZIP-LRT), are discussed and are analytically shown, like the likelihood ratio test method, to control the type-I error and false discovery rate. Simulation studies are performed to evaluate the performance characteristics of Ext-LRT and Ext-ZIP-LRT. The proposed methods are applied to the Gadolinium drug class in the FAERS database. An in-house likelihood ratio test tool, incorporating the Ext-LRT methodology, is being developed at the Food and Drug Administration.
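    A simplified, hedged sketch of a Poisson likelihood-ratio signal scan of the kind extended above (not the FDA tool): each adverse event's observed count is compared with its expected count under no association, and the null distribution of the maximum log-likelihood ratio is obtained by Monte Carlo to control the type-I error. The expected counts and the injected signal are assumptions.

```python
# Hedged sketch of a likelihood-ratio signal scan over adverse events (AEs) for
# one drug: a one-sided multinomial log-LR per AE cell, with a Monte Carlo null
# distribution of the maximum log-LR used as the signal threshold.
import numpy as np

def log_lr(n, p0, n_tot):
    """One-sided log-LR for one AE cell vs. the rest, null cell probability p0."""
    if n == 0 or n / n_tot <= p0:
        return 0.0
    rest = n_tot - n
    return n * np.log((n / n_tot) / p0) + rest * np.log((rest / n_tot) / (1 - p0))

rng = np.random.default_rng(21)
e = rng.uniform(5, 50, size=40)                      # expected counts per AE (assumed)
n_obs = rng.poisson(e)
n_obs[3] = int(3 * e[3])                             # one inflated AE as a toy signal
p0 = e / e.sum()                                     # null cell probabilities
n_tot = n_obs.sum()

obs_llr = np.array([log_lr(n, p, n_tot) for n, p in zip(n_obs, p0)])

def max_llr_null():
    n_sim = rng.multinomial(n_tot, p0)
    return max(log_lr(n, p, n_tot) for n, p in zip(n_sim, p0))

null_max = np.array([max_llr_null() for _ in range(999)])
threshold = np.quantile(null_max, 0.95)
print("AE indices flagged as signals:", np.where(obs_llr > threshold)[0])
```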

  3. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  4. MOD-score analysis with simple pedigrees: an overview of likelihood-based linkage methods.

    PubMed

    Strauch, Konstantin

    2007-01-01

    A MOD-score analysis, in which the parametric LOD score is maximized with respect to the trait-model parameters, can be a powerful method for the mapping of complex traits. With affected sib pairs, it has been shown before that MOD scores asymptotically follow a mixture of χ² distributions with 2, 1 and 0 degrees of freedom under the null hypothesis of no linkage. In that context, a MOD-score analysis yields some (albeit limited) information regarding the trait-model parameters, and there is a chance for an increased power compared to a simple LOD-score analysis. Here, it is shown that with unilineal affected relative pairs, MOD scores asymptotically follow a mixture of χ² distributions with 1 and 0 degrees of freedom under the null hypothesis, that is, the same distribution as followed by simple LOD scores. No information regarding the trait model can be obtained in this setting, and no power is gained when compared to a LOD-score analysis. An outlook to larger pedigrees is given. The number of degrees of freedom underlying the null distribution of MOD scores, which depends on the type of pedigrees studied, corresponds to the number of explored dimensions related to power and to the number of parameters that can jointly be estimated. Copyright 2007 S. Karger AG, Basel.

  5. New methods to assess severity and likelihood of urban flood risk from intense rainfall

    NASA Astrophysics Data System (ADS)

    Fewtrell, Tim; Foote, Matt; Bates, Paul; Ntelekos, Alexandros

    2010-05-01

    the construction of appropriate probabilistic flood models. This paper will describe new research being undertaken to assess the practicality of ultra-high resolution, ground-based laser-scanner data for flood modelling in urban centres, using new hydraulic propagation methods to determine the feasibility of applying such data within stochastic event models. Results from 'point cloud' data collected with a mobile terrestrial laser-scanner system in a key urban centre, combined with appropriate datasets, will be summarized here, and an initial assessment of the potential for the use of such data in stochastic event sets will be made. Conclusions are drawn from comparisons with previous studies and underlying DEM products of similar resolutions in terms of computational time, flood extent and flood depth. Based on the above, the study provides some current recommendations on the most appropriate resolution of input data for urban hydraulic modelling.

  6. Insufficient ct data reconstruction based on directional total variation (dtv) regularized maximum likelihood expectation maximization (mlem) method

    NASA Astrophysics Data System (ADS)

    Islam, Fahima Fahmida

    Sparse tomography is an efficient technique that saves time and minimizes cost. However, because only a few angular projections are available, the image reconstruction problem is ill-posed. In an ill-posed problem, the inversion cannot be performed uniquely even with exact data constraints. Therefore, the selection of a suitable optimization method plays an important role in sparse-data CT. Using a regularization function is a well-known way to control artifacts in limited-angle data acquisition. In this work, we propose a directional total variation (DTV) regularized, ordered-subset (OS) type image reconstruction method for neutron limited-data CT. Total variation (TV) regularization acts as an edge-preserving regularizer that not only preserves sharp edges but also reduces many of the artifacts that are very common in limited-data CT. However, TV itself is not direction dependent and is therefore not well suited to images with a dominant direction; for such images it is important to evaluate the total variation along that direction. Hence, a directional TV is used here as the prior term. TV regularization assumes piecewise smoothness. As the original image is not piecewise constant, a sparsifying transform is used to convert it into a sparse, approximately piecewise-constant image. The likelihood function, combined with this regularization term (DTV), is adopted as the objective function. An OS-type algorithm is used to optimize this objective function; generally, two methods are available to make OS methods convergent. This work proposes an OS-type, directional-TV-regularized likelihood reconstruction method that yields fast convergence as well as good image quality. The initial iteration starts with the filtered back projection (FBP) reconstructed image. Convergence is indicated by the convergence index between two successive reconstructed images. The quality of the image is assessed by showing
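    A heavily simplified, hedged sketch of the core idea follows: an MLEM-style update with a total-variation penalty applied in a one-step-late fashion. A small random non-negative matrix stands in for the CT projector, and a 1-D TV subgradient stands in for the directional TV prior; this is not the proposed OS algorithm, and all sizes and weights are assumptions.

```python
# Hedged toy sketch: MLEM with a one-step-late TV penalty on an under-determined
# (sparse-view-like) system; a random matrix replaces the real CT projector.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_rays = 64, 40                       # deliberately fewer rays than pixels
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))
x_true = np.zeros(n_pix)
x_true[20:40] = 1.0                          # piecewise-constant toy object
y = rng.poisson(A @ x_true + 1e-3)           # Poisson projection data

def tv_subgradient_1d(x):
    """Subgradient of sum |x_i - x_{i-1}| (a 1-D stand-in for directional TV)."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

x = np.ones(n_pix)
beta = 0.05                                  # regularisation weight (assumed)
sens = A.sum(axis=0)                         # sensitivity image
for _ in range(200):
    ratio = y / np.clip(A @ x, 1e-8, None)
    x *= (A.T @ ratio) / np.clip(sens + beta * tv_subgradient_1d(x), 1e-8, None)
    x = np.clip(x, 0.0, None)

print(f"mean absolute error vs. true object: {np.abs(x - x_true).mean():.3f}")
```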

  7. A powerful likelihood method for the analysis of linkage disequilibrium between trait loci and one or more polymorphic marker loci

    SciTech Connect

    Terwilliger, J.D.

    1995-03-01

    Historically, most methods for detecting linkage disequilibrium were designed for use with diallelic marker loci, for which the analysis is straightforward. With the advent of polymorphic markers with many alleles, the normal approach to their analysis has been either to extend the methodology for two-allele systems (leading to an increase in df and to a corresponding loss of power) or to select the allele believed to be associated and then collapse the other alleles, reducing, in a biased way, the locus to a diallelic system. I propose a likelihood-based approach to testing for linkage disequilibrium, an approach that becomes more conservative as the number of alleles increases, and as the number of markers considered jointly increases in a multipoint test for linkage disequilibrium, while maintaining high power. Properties of this method for detecting associations and fine mapping the location of disease traits are investigated. It is found to be, in general, more powerful than conventional methods, and it provides a tractable framework for the fine mapping of new disease loci. Application to the cystic fibrosis data of Kerem et al. is included to illustrate the method. 12 refs., 4 figs., 4 tabs.

  8. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus

    PubMed Central

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-01-01

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation, leading to unreduced gametes, as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism at both the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579

  9. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus.

    PubMed

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-04-20

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation, leading to unreduced gametes, as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism at both the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins.

  10. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former case and six different distance measures in the latter are compared for each component of the source parameter vector, based on the Nemenyi test over all 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time, and the ABC method is improved in several respects. Results show that the discrepancy measures, that is, the likelihood functions and distance measures, collectively have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
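    A toy, hedged sketch of the likelihood-free (ABC rejection) route mentioned above, with a simple bell-shaped forward model standing in for the Gaussian plume and an RMSE distance measure; the priors, tolerance rule and sensor layout are assumptions, not those of the paper.

```python
# Hedged toy sketch of ABC rejection for source term estimation: sample source
# strength and location from flat priors, run a simplified forward model, and
# keep the draws whose simulated concentrations are closest to the observations.
import numpy as np

rng = np.random.default_rng(4)
sensors = np.linspace(0.0, 10.0, 8)                 # downwind sensor positions (toy)

def forward(q, x0):
    """Very simplified plume-like field (an assumption, not a real dispersion model)."""
    q, x0 = np.atleast_1d(q), np.atleast_1d(x0)
    return q[:, None] * np.exp(-0.5 * ((sensors[None, :] - x0[:, None]) / 1.5) ** 2)

obs = forward(5.0, 3.0)[0] + rng.normal(0.0, 0.1, size=sensors.size)

n_draws = 50_000
q = rng.uniform(0.0, 10.0, size=n_draws)            # flat prior on strength (assumed)
x0 = rng.uniform(0.0, 10.0, size=n_draws)           # flat prior on location (assumed)
dist = np.sqrt(np.mean((forward(q, x0) - obs) ** 2, axis=1))   # RMSE distance measure

eps = np.quantile(dist, 0.005)                      # keep the closest 0.5% of draws
keep = dist <= eps
print(f"accepted {keep.sum()} of {n_draws} draws; "
      f"posterior mean strength = {q[keep].mean():.2f}, location = {x0[keep].mean():.2f}")
```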

  11. The Likelihood Function and Likelihood Statistics

    NASA Astrophysics Data System (ADS)

    Robinson, Edward L.

    2016-01-01

    The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
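    A brief worked example of the point about non-Gaussian data (an illustration added here, not the talk's material): for Poisson counts, maximizing the likelihood of a constant rate recovers the sample mean, which generally differs from a Gaussian-style least-squares fit on transformed counts.

```python
# Poisson maximum likelihood vs. a least-squares alternative for a constant rate.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
counts = rng.poisson(4.2, size=50)

nll = lambda lam: -(counts * np.log(lam) - lam).sum()           # Poisson, up to a constant
lsq = lambda lam: ((np.sqrt(counts) - np.sqrt(lam)) ** 2).sum() # variance-stabilised LS

lam_ml = minimize_scalar(nll, bounds=(0.1, 20), method="bounded").x
lam_ls = minimize_scalar(lsq, bounds=(0.1, 20), method="bounded").x
print(f"ML estimate = {lam_ml:.3f} (sample mean = {counts.mean():.3f}), "
      f"sqrt-LS estimate = {lam_ls:.3f}")
```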

  12. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
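
    A minimal sketch of the equivalence noted above, on synthetic data: the log range-corrected signal, modeled as intercept - 2*alpha*R, is fit by least squares weighted with the inverse noise variance, which is the MLE when the per-gate noise is independent and Gaussian. The noise model and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic single-wavelength lidar: S(R) = ln(R^2 P(R)) = intercept - 2*alpha*R
R = np.linspace(200.0, 2000.0, 60)              # range gates (m)
alpha_true, intercept_true = 1.2e-3, 4.0        # extinction (1/m) and zero-range intercept
S_clean = intercept_true - 2.0 * alpha_true * R
noise_var = (0.05 + R / 2.0e4) ** 2             # assumed per-gate noise variance
S = S_clean + rng.normal(0.0, np.sqrt(noise_var))

# Weighted least squares (weights = 1/variance) is the MLE for independent
# Gaussian noise: solve the 2x2 normal equations for [intercept, slope].
w = 1.0 / noise_var
A = np.column_stack([np.ones_like(R), R])
AtWA = A.T @ (w[:, None] * A)
AtWS = A.T @ (w * S)
intercept_hat, slope_hat = np.linalg.solve(AtWA, AtWS)
alpha_hat = -slope_hat / 2.0
print("estimated extinction:", alpha_hat, " (true", alpha_true, ")")
```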

  13. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    PubMed

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration required solving the mixed model equations an additional number of times equal to the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  14. Fluid dynamics test method

    NASA Technical Reports Server (NTRS)

    Gayman, W. H.

    1974-01-01

    Test method and apparatus determine fluid effective mass and damping in frequency range where effective mass may be considered as total mass less sum of slosh masses. Apparatus is designed so test tank and its mounting yoke are supported from structural test wall by series of flexures.

  15. A maximum-likelihood method to correct for allelic dropout in microsatellite data with no replicate genotypes.

    PubMed

    Wang, Chaolong; Schroeder, Kari B; Rosenberg, Noah A

    2012-10-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy-Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets
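
    The full model above handles sample- and locus-specific dropout rates, inbreeding and unknown allele frequencies; the sketch below is only a single-locus toy version (known allele frequency, one global dropout rate, no inbreeding) meant to show the E-step/M-step logic: observed homozygotes are probabilistically reassigned to heterozygotes with a dropped allele, and the dropout rate is re-estimated from those expected counts.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- simulate a toy data set -------------------------------------------------
p, gamma_true, n = 0.4, 0.25, 5000          # allele freq., dropout rate, sample size
true = rng.choice(["AA", "Aa", "aa"], size=n, p=[p**2, 2*p*(1-p), (1-p)**2])
obs = true.copy()
het = true == "Aa"
drop = het & (rng.random(n) < gamma_true)   # a heterozygote loses one allele at random
obs[drop] = np.where(rng.random(drop.sum()) < 0.5, "AA", "aa")

n_AA, n_aa, n_het = (obs == "AA").sum(), (obs == "aa").sum(), (obs == "Aa").sum()

# --- EM for the dropout rate gamma (allele frequency assumed known) ----------
gamma = 0.05
for _ in range(100):
    # E-step: expected number of observed homozygotes that are really heterozygotes
    e_AA = n_AA * (p*(1-p)*gamma) / (p**2 + p*(1-p)*gamma)
    e_aa = n_aa * (p*(1-p)*gamma) / ((1-p)**2 + p*(1-p)*gamma)
    # M-step: dropped heterozygotes / all true heterozygotes
    gamma = (e_AA + e_aa) / (e_AA + e_aa + n_het)
print("estimated dropout rate:", round(gamma, 3), " (true", gamma_true, ")")
```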

  16. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    PubMed Central

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets

  17. Cosmic bulk flows on 50 h-1 Mpc scales: a Bayesian hyper-parameter method and multishell likelihood analysis

    NASA Astrophysics Data System (ADS)

    Ma, Yin-Zhe; Scott, Douglas

    2013-01-01

    It has been argued recently that the galaxy peculiar velocity field provides evidence of excessive power on scales of 50 h-1 Mpc, which seems to be inconsistent with the standard Λ cold dark matter (ΛCDM) cosmological model. We discuss several assumptions and conventions used in studies of the large-scale bulk flow to check whether this claim is robust under a variety of conditions. Rather than using a composite catalogue we select samples from the SN, ENEAR, Spiral Field I-band Survey (SFI++) and First Amendment Supernovae (A1SN) catalogues, and correct for Malmquist bias in each according to the IRAS PSCz density field. We also use slightly different assumptions about the small-scale velocity dispersion and the parametrization of the matter power spectrum when calculating the variance of the bulk flow. By combining the likelihood of individual catalogues using a Bayesian hyper-parameter method, we find that the joint likelihood of the amplitude parameter gives σ8 = 0.65 (+0.47/-0.35, 68 per cent confidence region), which is entirely consistent with the ΛCDM model. In addition, the bulk flow magnitude, v ~ 310 km s-1, and direction, (l, b) ~ (280° ± 8°, 5.1° ± 6°), found by each of the catalogues are all consistent with each other, and with the bulk flow results from most previous studies. Furthermore, the bulk flow velocities in different shells of the surveys constrain (σ8, Ωm) to be (1.01 +0.26/-0.20, 0.31 +0.28/-0.14) for SFI++ and (1.04 +0.32/-0.24, 0.28 +0.30/-0.14) for ENEAR, which are consistent with the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) best-fitting values. We finally discuss the differences between our conclusions and those of the studies claiming the largest bulk flows.

  18. SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT

    SciTech Connect

    Collins-Fekete, C; Schulte, R; Beaulieu, L; Seco, J

    2016-06-15

    Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and resulting images were analyzed. Both parallel and conical beams have been investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied on proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. Conclusion: The

  19. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    PubMed

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples - thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities - to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and

  1. Characterization of a likelihood based method and effects of markers informativeness in evaluation of admixture and population group assignment

    PubMed Central

    Yang, Bao-Zhu; Zhao, Hongyu; Kranzler, Henry R; Gelernter, Joel

    2005-01-01

    Background: Detection and evaluation of population stratification are crucial issues in the conduct of genetic association studies. Statistical approaches useful for understanding these issues have been proposed; these methods rely on information gained from genotyping sets of markers that reflect population ancestry. Before using these methods, a set of markers informative for differentiating population genetic substructure (PGS) is necessary. We have previously evaluated the performance of a Bayesian clustering method implemented in the software STRUCTURE in detecting PGS with a particular informative marker set. In this study, we implemented a likelihood based method (LBM) in evaluating the informativeness of the same selected marker panel, with respect to assessing potential for stratification in samples of European Americans (EAs) and African Americans (AAs), that are known to be admixed. LBM calculates the probability of a set of genotypes based on observations in a reference population with known specific allele frequencies for each marker, assuming Hardy-Weinberg equilibrium (HWE) for each marker and linkage equilibrium among markers. Results: In EAs, the assignment accuracy by LBM exceeded 99% using the most efficient marker FY, and reached perfect assignment accuracy using the 10 most efficient markers excluding FY. In AAs, the assignment accuracy reached 96.4% using FY, and >95% when using at least the 9 most efficient markers. The comparison of the observed and reference allele frequencies (which were derived from previous publications and public databases) shows that allele frequencies observed in EAs matched the reference group more accurately than allele frequencies observed in AAs. As a result, the LBM performed better in EAs than AAs, as might be expected given the dependence of LBMs on prior knowledge of allele frequencies. Performance was not dependent on sample size. Conclusion: The performance of the LBM depends on the efficiency and number of
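
    A minimal sketch of the likelihood computation described above: the probability of a multilocus genotype is the product of per-marker Hardy-Weinberg genotype probabilities computed from reference allele frequencies, and an individual is assigned to the population with the higher likelihood. The marker panel and frequencies below are made up for illustration, not the published panel.

```python
import math

# Reference allele frequencies (frequency of allele "1" at each marker) for two
# populations; the numbers are illustrative.
freq = {
    "EA": [0.02, 0.10, 0.85, 0.40],
    "AA": [0.80, 0.55, 0.20, 0.65],
}

def genotype_log_likelihood(genotype, p_list):
    """genotype[i] is the count (0, 1 or 2) of allele "1" at marker i.
    Assumes Hardy-Weinberg equilibrium and linkage equilibrium."""
    ll = 0.0
    for g, p in zip(genotype, p_list):
        if g == 2:
            ll += math.log(p * p)
        elif g == 1:
            ll += math.log(2 * p * (1 - p))
        else:
            ll += math.log((1 - p) * (1 - p))
    return ll

def assign(genotype):
    scores = {pop: genotype_log_likelihood(genotype, p) for pop, p in freq.items()}
    return max(scores, key=scores.get), scores

print(assign([0, 0, 2, 1]))   # this genotype is more probable under the "EA" frequencies
```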

  2. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    NASA Astrophysics Data System (ADS)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Being a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.
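
    The AdvMLE algorithm itself is not reproduced here; as a stand-in, the sketch below implements the classical Richardson-Lucy iteration, the standard maximum-likelihood deconvolution for Poisson-noise images, with an assumed Gaussian PSF and a synthetic two-point object.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=50):
    """Classical maximum-likelihood (Poisson) deconvolution."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demonstration: blur a two-point object, add Poisson noise, then restore.
rng = np.random.default_rng(4)
truth = np.zeros((64, 64))
truth[32, 30] = truth[32, 34] = 200.0
psf = gaussian_psf()
observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
restored = richardson_lucy(observed, psf)
print("profile across the two points after restoration:", restored[32, 28:37].round(1))
```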

  3. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  4. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  5. Comparing Performances (Type I Error and Power) of IRT Likelihood Ratio SIBTEST and Mantel-Haenszel Methods in the Determination of Differential Item Functioning

    ERIC Educational Resources Information Center

    Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya

    2014-01-01

    This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…

  6. The Use of Dynamic Stochastic Social Behavior Models to Produce Likelihood Functions for Risk Modeling of Proliferation and Terrorist Attacks

    SciTech Connect

    Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.

    2008-12-01

    The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.

  7. Maximum likelihood signature estimation

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1975-01-01

    Maximum-likelihood estimates of the unknown parameters in a mixture of normal distributions, based on an unlabeled sample of observations, are discussed. Several successive approximation procedures for obtaining such maximum-likelihood estimates are described. These procedures, which are theoretically justified by the local contractibility of certain maps, are designed to take advantage of good initial estimates of the unknown parameters. They can be applied to the signature extension problem, in which good initial estimates of the unknown parameters are obtained from segments which are geographically near the segments from which the unlabeled samples are taken. Additional problems to which these methods are applicable include: estimation of proportions and adaptive classification (estimation of mean signatures and covariances).
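
    The successive-approximation procedures described above are closely related to what is now usually presented as the EM algorithm for normal mixtures. The sketch below is a standard one-dimensional, two-component EM on synthetic data, not the report's specific procedures; note how a good initial estimate (as emphasized above) simply becomes the starting point of the iteration.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

# EM for a two-component univariate Gaussian mixture.
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component for each observation
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions, means and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print("weights:", w.round(2), "means:", mu.round(2), "sigmas:", sigma.round(2))
```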

  8. Stepwise Signal Extraction via Marginal Likelihood

    PubMed Central

    Du, Chao; Kao, Chu-Lan Michael

    2015-01-01

    This paper studies the estimation of stepwise signals. To determine the number and locations of change-points of the stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with a quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
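
    The estimator above maximizes a marginal likelihood with an empirical Bayes prior; the sketch below keeps only the computational skeleton, a quadratic-cost dynamic program that finds the optimal segmentation of a sequence into constant pieces under a simpler penalized least-squares criterion (the penalty is an assumption, standing in for the paper's prior).

```python
import numpy as np

def segment(y, penalty=8.0):
    """Optimal partition of y into constant segments by dynamic programming.
    Segment cost = within-segment sum of squares; each new segment pays `penalty`."""
    n = len(y)
    csum = np.cumsum(np.r_[0.0, y])
    csum2 = np.cumsum(np.r_[0.0, y**2])

    def cost(i, j):                      # sum of squared residuals of y[i:j] around its mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + penalty
            if c < best[j]:
                best[j], last[j] = c, i
    # backtrack the change-points
    cps, j = [], n
    while j > 0:
        cps.append(last[j])
        j = last[j]
    return sorted(cps)[1:]               # drop the leading 0

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 30), rng.normal(1, 1, 40)])
print("estimated change-points:", segment(y))   # expected near [50, 80]
```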

  9. Identifying change in the likelihood of violent recidivism: causal dynamic risk factors in the OASys violence predictor.

    PubMed

    Howard, Philip D; Dixon, Louise

    2013-06-01

    Recent studies of multiwave risk assessment have investigated the association between changes in risk factors and violent recidivism. This study analyzed a large multiwave data set of English and Welsh offenders (N = 196,493), assessed in realistic correctional conditions using the static/dynamic Offender Assessment System (OASys). It aimed to compare the predictive validity of the OASys Violence Predictor (OVP) under mandated repeated assessment and one-time initial assessment conditions. Scores on 5 of OVP's 7 purportedly dynamic risk factors changed in 6 to 15% of pairs of successive assessments, whereas the other 2 seldom changed. Violent reoffenders had higher initial total and dynamic OVP scores than nonreoffenders, yet nonreoffenders' dynamic scores fell by significantly more between initial and final assessment. OVP scores from the current assessment achieved greater predictive validity than those from the initial assessment. Cox regression models showed that, for total OVP scores and most risk factors, both the initial score and the change in score from initial to current assessment significantly predicted reoffending. These results consistently showed that OVP includes several causal dynamic risk factors for violent recidivism, which can be measured reliably in operational settings. This adds to the evidence base that links changes in risk factors to changes in future reoffending risk and links the use of repeated assessments to incremental improvements in predictive validity. Further research could quantify the costs and benefits of reassessment in correctional practice, study associations between treatment and dynamic risk factors, and separate the effects of improvements and deteriorations in dynamic risk.

  10. Profile Likelihood and Incomplete Data.

    PubMed

    Zhang, Zhiwei

    2010-04-01

    According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.

  11. Dynamic Method for Identifying Collected Sample Mass

    NASA Technical Reports Server (NTRS)

    Carson, John

    2008-01-01

    G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
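
    As a heavily simplified, single-axis stand-in for the estimator described above: if the sensed force during thruster firings is modeled as F = (m_known + m_sample) * a plus Gaussian noise, the maximum-likelihood estimate of the total mass is a least-squares slope, and the sample mass is the excess over the known spacecraft mass. The one-dimensional model and all numbers are assumptions for illustration, not the G-Sample dynamics model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-axis toy model: the force sensor reads F_i = (m_known + m_sample) * a_i + noise,
# where a_i is the acceleration during thruster firings.
m_known, m_sample_true = 12.0, 0.047            # kg (0.047 kg = 47 g of sample)
a = rng.uniform(0.02, 0.08, size=400)           # m/s^2 during firings
noise_sd = 0.01                                  # N, assumed Gaussian sensor noise
F = (m_known + m_sample_true) * a + rng.normal(0.0, noise_sd, size=a.size)

# With Gaussian noise the maximum-likelihood estimate of the total mass is the
# least-squares slope of F against a; the sample mass is what exceeds m_known.
m_total_hat = np.sum(F * a) / np.sum(a * a)
m_sample_hat = m_total_hat - m_known
sd_hat = noise_sd / np.sqrt(np.sum(a * a))      # standard error of the slope
print(f"estimated sample mass: {1e3*m_sample_hat:.1f} g  (+/- {1e3*1.96*sd_hat:.1f} g, 95%)")
```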

  12. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  13. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  14. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  15. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors and the resulting analysis is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One approach often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (Mean Square Error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.

  16. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
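
    A minimal sketch in the spirit of the consistency (likelihood) test described above: the forecast specifies an expected number of events per bin, the joint Poisson log-likelihood of the observed bin counts is computed, and its quantile among catalogues simulated from the forecast itself indicates consistency. The bin structure and rates are illustrative, not a RELM model.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of independent Poisson bin counts under a forecast."""
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    return np.sum(-lam + n * np.log(lam) - gammaln(n + 1))

# Illustrative forecast: expected number of events per (location, magnitude) bin
forecast = np.array([0.20, 0.05, 0.01, 0.50, 0.10, 0.02])
observed = np.array([0,    0,    0,    1,    0,    0   ])

ll_obs = poisson_log_likelihood(forecast, observed)

# Consistency check: simulate catalogues from the forecast itself and see
# where the observed log-likelihood falls among them.
rng = np.random.default_rng(8)
sims = rng.poisson(forecast, size=(10000, forecast.size))
ll_sim = np.array([poisson_log_likelihood(forecast, s) for s in sims])
quantile = np.mean(ll_sim <= ll_obs)
print("observed log-likelihood:", round(ll_obs, 3), " quantile among simulations:", quantile)
```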

  17. Bayesian computation via empirical likelihood

    PubMed Central

    Mengersen, Kerrie L.; Pudlo, Pierre; Robert, Christian P.

    2013-01-01

    Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. PMID:23297233

  18. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    PubMed

    Molenaar, P C; Nesselroade, J R

    1998-07-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), which is a generalization of the traditional common factor model, has been proposed by Molenaar (1985) for systematically extracting information from multivariate time-series via latent variable modeling. Implementation of the DFM model has taken several forms, one of which involves specifying it as a covariance-structure model and estimating its parameters from a block-Toeplitz matrix derived from the multivariate time-series. We compare two methods for estimating DFM parameters within a covariance-structure framework - pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation - by means of a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates of comparable precision, but only the ADF method gives standard errors and chi-square statistics that appear to be consistent. The relative ordering of the values of all estimates appears to be very similar across methods. When the manifest time-series is relatively short, the two methods appear to perform about equally well.

  19. Integration methods for molecular dynamics

    SciTech Connect

    Leimkuhler, B.J.; Reich, S.; Skeel, R.D.

    1996-12-31

    Classical molecular dynamics simulation of a macromolecule requires the use of an efficient time-stepping scheme that can faithfully approximate the dynamics over many thousands of timesteps. Because these problems are highly nonlinear, accurate approximation of a particular solution trajectory on meaningful time intervals is neither obtainable nor desired, but some restrictions, such as symplecticness, can be imposed on the discretization which tend to imply good long term behavior. The presence of a variety of types and strengths of interatom potentials in standard molecular models places severe restrictions on the timestep for numerical integration used in explicit integration schemes, so much recent research has concentrated on the search for alternatives that possess (1) proper dynamical properties, and (2) a relative insensitivity to the fastest components of the dynamics. We survey several recent approaches. 48 refs., 2 figs.

  20. Sequence comparison and phylogenetic analysis by the Maximum Likelihood method of ribosome-inactivating proteins from angiosperms.

    PubMed

    Di Maro, Antimo; Citores, Lucía; Russo, Rosita; Iglesias, Rosario; Ferreras, José Miguel

    2014-08-01

    Ribosome-inactivating proteins (RIPs) from angiosperms are rRNA N-glycosidases that have been proposed as defence proteins against virus and fungi. They have been classified as type 1 RIPs, consisting of single-chain proteins, and type 2 RIPs, consisting of an A chain with RIP properties covalently linked to a B chain with lectin properties. In this work we have carried out a broad search of RIP sequence data banks from angiosperms in order to study their main structural characteristics and phylogenetic evolution. The comparison of the sequences revealed the presence, outside of the active site, of a novel structure that might be involved in the internal protein dynamics linked to enzyme catalysis. Also the B-chains presented another conserved structure that might function either supporting the beta-trefoil structure or in the communication between both sugar-binding sites. A systematic phylogenetic analysis of RIP sequences revealed that the most primitive type 1 RIPs were similar to that of the actual monocots (Poaceae and Asparagaceae). The primitive RIPs evolved to the dicot type 1 related RIPs (like those from Caryophyllales, Lamiales and Euphorbiales). The gene of a type 1 RIP related with the actual Euphorbiaceae type 1 RIPs fused with a double beta trefoil lectin gene similar to the actual Cucurbitaceae lectins to generate the type 2 RIPs and finally this gene underwent deletions rendering either type 1 RIPs (like those from Cucurbitaceae, Rosaceae and Iridaceae) or lectins without A chain (like those from Adoxaceae).

  1. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  2. Novel methods for molecular dynamics simulations.

    PubMed

    Elber, R

    1996-04-01

    In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.

  3. Inference of Gene Flow in the Process of Speciation: An Efficient Maximum-Likelihood Method for the Isolation-with-Initial-Migration Model

    PubMed Central

    Costa, Rui J.; Wilkinson-Herbots, Hilde

    2017-01-01

    The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727

  4. The phylogenetic likelihood library.

    PubMed

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).

  5. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  6. Optical methods in fault dynamics

    NASA Astrophysics Data System (ADS)

    Uenishi, K.; Rossmanith, H. P.

    2003-10-01

    The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage onto a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.

  7. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    PubMed

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.

  8. Photon Counting Data Analysis: Application of the Maximum Likelihood and Related Methods for the Determination of Lifetimes in Mixtures of Rose Bengal and Rhodamine B

    SciTech Connect

    Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; Song, Xueyu

    2016-12-12

    It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
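
    As an illustration of the probability-based approach discussed above, the sketch below fits a two-component exponential decay plus constant background to a synthetic photon-counting histogram by minimizing the Poisson negative log-likelihood. The parameterization, lifetimes and optimizer settings are assumptions for the example, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# Synthetic TCSPC histogram: two lifetimes, Poisson counting noise.
t = np.linspace(0.0, 20.0, 256)                       # ns, bin centres

def model(params):
    a1, tau1, a2, tau2 = params
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + 2.0   # + constant background

true = (800.0, 0.8, 400.0, 3.2)                       # synthetic amplitudes (counts) and lifetimes (ns)
counts = rng.poisson(model(true))

def neg_log_lik(params):
    if np.any(np.asarray(params) <= 0):
        return np.inf
    lam = model(params)
    return np.sum(lam - counts * np.log(lam))         # Poisson NLL, constant terms dropped

fit = minimize(neg_log_lik, x0=(500.0, 1.0, 500.0, 2.0), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("fitted (a1, tau1, a2, tau2):", np.round(fit.x, 2))
```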

  9. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
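
    A minimal toy version of the likelihood-free MCMC idea described above: a symmetric random-walk proposal is accepted only if the prior ratio passes the usual Metropolis test and data simulated under the proposal fall within a tolerance of the observation. The Poisson model, prior, proposal scale and tolerance are illustrative; the population-genetics application in the paper is far richer.

```python
import numpy as np

rng = np.random.default_rng(10)

# Observed summary statistic: total count from a Poisson model with unknown rate.
theta_true = 6.0
x_obs = rng.poisson(theta_true, size=100).sum()

def simulate(theta):
    return rng.poisson(theta, size=100).sum()

def log_prior(theta):
    return -0.1 * theta if theta > 0 else -np.inf     # Exponential(0.1) prior

# Likelihood-free MCMC: the acceptance step uses simulated data instead of a
# likelihood evaluation; rejected moves keep the current state, as usual.
theta, samples, eps = 3.0, [], 20
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.5)               # symmetric random-walk proposal
    if np.log(rng.random()) < log_prior(prop) - log_prior(theta) and \
       abs(simulate(prop) - x_obs) <= eps:
        theta = prop
    samples.append(theta)
print("posterior mean of theta:", np.mean(samples[5000:]).round(2), " (true 6.0)")
```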

  10. Spectral methods in fluid dynamics

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1986-01-01

    Fundamental aspects of spectral methods are introduced. Recent developments in spectral methods are reviewed with an emphasis on collocation techniques. Their applications to both compressible and incompressible flows, to viscous as well as inviscid flows, and also to chemically reacting flows are surveyed. The key role that these methods play in the simulation of stability, transition, and turbulence is brought out. A perspective is provided on some of the obstacles that prohibit a wider use of these methods, and how these obstacles are being overcome.

  11. [Contrastive study on dynamic spectrum extraction method].

    PubMed

    Li, Gang; Zhou, Mei; Wang, Hui-quan; Xiong, Chan; Lin, Ling

    2012-05-01

    The dynamic spectrum method extracts the absorbance of arterial pulse blood at several wavelengths. The method can reduce influences such as measurement conditions, individual differences and spectral overlap, and it offers a new way to detect blood components noninvasively. However, choosing a dynamic spectrum extraction method is a key step, because the component-related spectral signal is weak. Two methods are currently used to extract the dynamic spectral signal: frequency-domain analysis and single-trial estimation in the time domain. In the present research, a thorough comparative analysis of the two methods was carried out. Theoretical analysis and experimental results show that the two methods extract the dynamic spectrum from different angles but are the same in essence, sharing the basic principle of the dynamic spectrum and the statistical and averaging properties of the signal. For pulse waves with relatively stable period and amplitude, a high-precision dynamic spectrum can be obtained by either method. For unstable pulse waves caused by finger tremor and changes in contact pressure, the dynamic spectrum extracted by single-trial estimation is more accurate than that obtained by frequency-domain analysis.

  12. Fast likelihood classification

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1991-01-01

    A multistage classification that reduces the processing time substantially is proposed. This classification algorithm consists of several stages, and in each stage likelihood values of classes are calculated and compared. If a class has a likelihood value less than a threshold, the class is truncated at that stage as an unlikely class, thus reducing the number of classes for which likelihood values are to be calculated at the next stage. Thus a host of classes can be truncated using a small portion of the total features at early stages, resulting in a substantial reduction of computing time. Several truncation criteria are developed, and the relationship between thresholds and the error caused by the truncation is investigated. Experiments show that the proposed algorithm reduces the processing time by a factor of 3-7, depending on the number of classes and features, while maintaining essentially the same accuracies.
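    A toy sketch of the multistage idea, assuming Gaussian class-conditional densities with a shared covariance; the two-stage split, the truncation threshold and the data are illustrative and not the paper's truncation criteria:

```python
# Toy sketch of multistage likelihood classification, assuming Gaussian classes
# with a shared covariance: score every class on a cheap feature subset, truncate
# clearly unlikely classes, and evaluate the full likelihood only for survivors.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n_features, n_classes = 8, 5
means = rng.normal(0.0, 3.0, size=(n_classes, n_features))
cov = np.eye(n_features)                       # shared identity covariance

def classify(x, stage1_dims=2, log_gap=20.0):
    # Stage 1: log-likelihoods from the first few features only.
    ll = np.array([multivariate_normal.logpdf(x[:stage1_dims],
                                              means[c, :stage1_dims],
                                              cov[:stage1_dims, :stage1_dims])
                   for c in range(n_classes)])
    survivors = np.where(ll > ll.max() - log_gap)[0]   # truncate unlikely classes
    # Stage 2: full-feature likelihood for the surviving classes only.
    full_ll = {c: multivariate_normal.logpdf(x, means[c], cov) for c in survivors}
    return max(full_ll, key=full_ll.get)

x = means[3] + rng.normal(0.0, 1.0, n_features)        # sample drawn near class 3
print("predicted class:", classify(x))
```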

  13. Simulation for position determination of distal and proximal edges for SOBP irradiation in hadron therapy by using the maximum likelihood estimation method

    NASA Astrophysics Data System (ADS)

    Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro

    2005-12-01

    In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we proposed to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.

  14. Metrics for expert judgement in volcanic hazard assessment: comparing the Cooke classical model with a new method based on individual performance likelihood

    NASA Astrophysics Data System (ADS)

    Flandoli, F.; Giorgi, E.; Aspinall, W. A.; Neri, A.

    2009-04-01

    Expert elicitation is a method to obtain estimates for variables of interest when data are sparse or ambiguous. A team of experts is created and each is asked to provide three values for each target variable (typically the 5% quantile, the median, and the 95% quantile). If some weight can be associated with each expert, then different opinions can be pooled to generate a weighted mean, thus providing an estimate of the uncertain variable. The key challenge is to assign a proper weight to each expert. To determine this weight empirically, the experts can be asked a set of 'seed' questions, whose values are known by the analyst (facilitator). In this approach, the experts provide three separate quantile values for each question, and the expert's capability of quantifying uncertainty can be evaluated. For instance, the Cooke classical model quantifies the collective scientific uncertainty through an expert scoring scheme by which weights are ascribed to individual experts on the basis of empirically determined calibration and informativeness scores obtained from a probability analysis of individual performances. In our work, we compare such a method with a new algorithm in which the calibration score is replaced by one based on the likelihood of observing these expert performances. The simple idea behind this is to reward more strongly those experts whose seed item median values are systematically closer to the true values. Given the three quantile values provided by every expert for each question, we fit a Beta distribution to each test item response and compute the probability that the location parameter of that distribution corresponds to the real value by chance. For each expert, the geometric mean of these probabilities is computed as the likelihood factor, L(e), of the expert, thus providing an alternative 'calibration' score. An information factor, I(e), is also computed as the arithmetic mean of the relative entropies of the expert's distributions
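    A loose sketch of the likelihood-based weighting idea, under the assumption that each seed response is rescaled to [0, 1] before a Beta distribution is fitted to the three quantiles and evaluated at the known value; the rescaling, the fitting loss and all numbers are assumptions rather than the authors' exact procedure:

```python
# Loose sketch of a likelihood-based expert weight: fit a Beta distribution to an
# expert's (5%, 50%, 95%) quantiles for each seed question (after rescaling the
# variable to [0, 1]), evaluate its density at the known true value, and combine
# questions by a geometric mean.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta, gmean

def fit_beta_to_quantiles(q05, q50, q95):
    """Beta(a, b) whose 5%, 50%, 95% quantiles best match the expert's answers."""
    target = np.array([q05, q50, q95])
    def loss(params):
        a, b = np.exp(params)                          # keep a, b positive
        return np.sum((beta.ppf([0.05, 0.5, 0.95], a, b) - target) ** 2)
    a, b = np.exp(minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x)
    return a, b

def likelihood_weight(expert_quantiles, true_values, lo, hi):
    """Geometric mean of the expert's fitted densities at the rescaled truths."""
    densities = []
    for (q05, q50, q95), truth in zip(expert_quantiles, true_values):
        scale = lambda v: (v - lo) / (hi - lo)         # assumed rescaling to [0, 1]
        a, b = fit_beta_to_quantiles(scale(q05), scale(q50), scale(q95))
        densities.append(beta.pdf(scale(truth), a, b) + 1e-12)
    return gmean(densities)

# Two illustrative seed questions on a 0-100 scale and their known answers.
quantiles = [(20, 40, 60), (10, 30, 70)]
truths = [45, 25]
print(f"likelihood-based weight L(e): {likelihood_weight(quantiles, truths, 0, 100):.3f}")
```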

  15. Numerical methods for molecular dynamics

    SciTech Connect

    Skeel, R.D.

    1991-01-01

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.

  16. A Method and On-Line Tool for Maximum Likelihood Calibration of Immunoblots and Other Measurements That Are Quantified in Batches.

    PubMed

    Andrews, Steven S; Rutherford, Suzannah

    2016-01-01

    Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a "1-step calibration method" reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional "2-step" method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use.
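    A minimal sketch of the 1-step, joint maximum-likelihood calibration idea, assuming the simple model y[batch, sample] = gain[batch] × value[sample] + Gaussian noise, with one batch gain fixed for identifiability; the model form and names are assumptions, not the published tool's interface:

```python
# Minimal "1-step" calibration sketch: all batch gains and sample values are
# fitted jointly by Gaussian maximum likelihood (equivalently, least squares),
# with batch 0 serving as the reference.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
true_gain = np.array([1.0, 1.7, 0.6])              # one gain per blot/batch
true_value = np.array([2.0, 5.0, 1.0, 3.5])        # one abundance per sample
y = true_gain[:, None] * true_value[None, :] + rng.normal(0, 0.1, (3, 4))

def residuals(params):
    gains = np.concatenate(([1.0], params[:2]))    # batch 0 is the reference
    values = params[2:]
    return (gains[:, None] * values[None, :] - y).ravel()

fit = least_squares(residuals, x0=np.ones(2 + 4))
print("estimated sample values:", np.round(fit.x[2:], 2))
```

    Samples replicated across batches are what tie the batch gains together in practice; this toy uses the simplest such design, in which every sample appears in every batch.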

  17. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.

  18. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    PubMed

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-09-12

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.

  19. Photon Counting Data Analysis: Application of the Maximum Likelihood and Related Methods for the Determination of Lifetimes in Mixtures of Rose Bengal and Rhodamine B

    DOE PAGES

    Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...

    2016-12-12

    It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.

  20. A Method and On-Line Tool for Maximum Likelihood Calibration of Immunoblots and Other Measurements That Are Quantified in Batches

    PubMed Central

    Andrews, Steven S.; Rutherford, Suzannah

    2016-01-01

    Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a “1-step calibration method” reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional “2-step” method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use. PMID:26908370

  1. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods

    PubMed Central

    Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra

    2013-01-01

    Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial

  2. Empirical aspects of the Whittle-based maximum likelihood method in jointly estimating seasonal and non-seasonal fractional integration parameters

    NASA Astrophysics Data System (ADS)

    Marques, G. O. L. C.

    2011-01-01

    This paper addresses the efficiency of the maximum likelihood (ML) method in jointly estimating the fractional integration parameters ds and d, respectively associated with seasonal and non-seasonal long-memory components in discrete stochastic processes. The influence of the size of the non-seasonal parameter on the estimation of the seasonal parameter, and vice versa, was analyzed in the space d × ds ∈ (0,1) × (0,1) by using the mean squared error statistics MSE(d̂s) and MSE(d̂). This study was based on Monte Carlo simulation experiments using the ML estimator with Whittle’s approximation in the frequency domain. Numerical results revealed that the efficiency in jointly estimating each integration parameter is affected in different ways by their sizes: as ds and d increase simultaneously to 1, MSE(d̂s) and MSE(d̂) become larger; however, the effects on MSE(d̂s) are much stronger than the effects on MSE(d̂). Moreover, as each parameter tends individually to 1, MSE(d̂) becomes larger, but MSE(d̂s) is barely influenced.
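    A toy Whittle-likelihood estimate of a single (non-seasonal) fractional integration parameter d for an ARFIMA(0, d, 0) series; the joint seasonal/non-seasonal case studied above uses the same construction with a richer spectral density, and everything below is illustrative:

```python
# Toy Whittle (frequency-domain) maximum-likelihood estimate of d for an
# ARFIMA(0, d, 0) series, using the concentrated Whittle objective.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def simulate_arfima(n, d):
    """Fractionally integrated noise via a truncated MA(infinity) expansion."""
    k = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    return np.convolve(rng.normal(size=n), psi)[:n]

def whittle_d(x):
    n = x.size
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2.0 * np.pi * j / n
    periodogram = np.abs(np.fft.fft(x)[j]) ** 2 / (2.0 * np.pi * n)
    def objective(d):
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)   # spectrum up to sigma^2
        return np.log(np.mean(periodogram / g)) + np.mean(np.log(g))
    return minimize_scalar(objective, bounds=(-0.49, 0.49), method="bounded").x

x = simulate_arfima(2048, d=0.3)
print(f"Whittle estimate of d: {whittle_d(x):.3f} (true 0.3)")
```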

  3. Age‐period‐cohort analysis of cervical cancer incidence in Hong Kong from 1972 to 2001 using maximum likelihood and Bayesian methods

    PubMed Central

    Leung, Gabriel M; Woo, Pauline P S; McGhee, Sarah M; Cheung, Annie N Y; Fan, Susan; Mang, Oscar; Thach, Thuan Q; Ngan, Hextan Y S

    2006-01-01

    Objective To examine the secular effects of opportunistic screening for cervical cancer in a rich, developed community where most other such populations have long adopted organised screening. Design, setting, and participants The analysis was based on 15 140 cases of invasive cervical cancer from 1972 to 2001. The effects of chronological age, time period, and birth cohort were decomposed using both maximum likelihood and Bayesian methods. Results The overall age adjusted incidence decreased from 24.9 in 1972–74 to 9.5 per 100,000 in 1999–2001, in a log‐linear fashion, yielding an average annual reduction of 4.0% (p<0.001) during the 30 year period. There were two second order and thus identifiable changes: (1) around the mid‐1920s cohort curve representing an age‐period interaction masquerading as a cohort change that denotes the first availability of Pap testing during the 1960s concentrated among women in their 40s; (2) a hook around the calendar years 1982–83 when cervical cytology became a standard screening test for pregnant women. Conclusions Hong Kong's cervical cancer rates have declined since Pap tests first became available in the 1960s, most probably because of increasing population coverage over time and in successive generations in a haphazard fashion and punctuated by the systematic introduction of routine cytology as part of antenatal care in the 1980s. PMID:16840762

  4. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    PubMed

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods.

    PubMed

    Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra

    2013-11-01

    Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree-species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia-Lepidozamia-Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae.

  6. Particle dynamics modeling methods for colloid suspensions

    NASA Astrophysics Data System (ADS)

    Bolintineanu, Dan S.; Grest, Gary S.; Lechman, Jeremy B.; Pierce, Flint; Plimpton, Steven J.; Schunk, P. Randall

    2014-09-01

    We present a review and critique of several methods for the simulation of the dynamics of colloidal suspensions at the mesoscale. We focus particularly on simulation techniques for hydrodynamic interactions, including implicit solvents (Fast Lubrication Dynamics, an approximation to Stokesian Dynamics) and explicit/particle-based solvents (Multi-Particle Collision Dynamics and Dissipative Particle Dynamics). Several variants of each method are compared quantitatively for the canonical system of monodisperse hard spheres, with a particular focus on diffusion characteristics, as well as shear rheology and microstructure. In all cases, we attempt to match the relevant properties of a well-characterized solvent, which turns out to be challenging for the explicit solvent models. Reasonable quantitative agreement is observed among all methods, but overall the Fast Lubrication Dynamics technique shows the best accuracy and performance. We also devote significant discussion to the extension of these methods to more complex situations of interest in industrial applications, including models for non-Newtonian solvent rheology, non-spherical particles, drying and curing of solvent and flows in complex geometries. This work identifies research challenges and motivates future efforts to develop techniques for quantitative, predictive simulations of industrially relevant colloidal suspension processes.

  7. List-mode likelihood

    PubMed Central

    Barrett, Harrison H.; White, Timothy; Parra, Lucas C.

    2010-01-01

    As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. In various imaging systems the attributes can include several position variables, time variables, and energies. If more than about four attributes are measured for each event, it is not practical to record the data in an image matrix. Instead it is more efficient to use a simple list where every attribute is stored for every event. It is the purpose of this paper to discuss the concept of likelihood for such list-mode data. We present expressions for list-mode likelihood with an arbitrary number of attributes per photon and for both preset counts and preset time. Maximization of this likelihood can lead to a practical reconstruction algorithm with list-mode data, but that aspect is covered in a separate paper [IEEE Trans. Med. Imaging (to be published)]. An expression for lesion detectability for list-mode data is also derived and compared with the corresponding expression for conventional binned data. PMID:9379247

  8. Dynamic discretization method for solving Kepler's equation

    NASA Astrophysics Data System (ADS)

    Feinstein, Scott A.; McLaughlin, Craig A.

    2006-09-01

    Kepler’s equation needs to be solved many times for a variety of problems in Celestial Mechanics. Therefore, computing the solution to Kepler’s equation in an efficient manner is of great importance to that community. There are some historical and many modern methods that address this problem. Of the methods known to the authors, Fukushima’s discretization technique performs the best. By taking more of a system approach and combining the use of discretization with the standard computer science technique known as dynamic programming, we were able to achieve even better performance than Fukushima. We begin by defining Kepler’s equation for the elliptical case and describe existing solution methods. We then present our dynamic discretization method and show the results of a comparative analysis. This analysis will demonstrate that, for the conditions of our tests, dynamic discretization performs the best.
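    For comparison, a sketch of the classical Newton iteration for the elliptical Kepler equation M = E − e sin E, i.e. the kind of baseline solver that precomputed (discretization-based) starting values accelerate; this is not the authors' dynamic discretization scheme:

```python
# Baseline only: the classical Newton iteration for the elliptical Kepler
# equation M = E - e*sin(E).
import numpy as np

def solve_kepler(mean_anomaly, eccentricity, tol=1e-12, max_iter=50):
    """Return the eccentric anomaly E solving E - e*sin(E) = M."""
    E = mean_anomaly if eccentricity < 0.8 else np.pi   # common starting guess
    for _ in range(max_iter):
        f = E - eccentricity * np.sin(E) - mean_anomaly
        E -= f / (1.0 - eccentricity * np.cos(E))
        if abs(f) < tol:
            break
    return E

E = solve_kepler(mean_anomaly=1.0, eccentricity=0.5)
print(f"E = {E:.12f}, residual = {E - 0.5 * np.sin(E) - 1.0:.2e}")
```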

  9. Dynamic Waypoint Navigation Using Voronoi Classifier Methods

    DTIC Science & Technology

    2004-12-01

    This paper details the development of a dynamic waypoint navigation method ...elements of the environment are known initially and are used in the computation of the initial path). The drawback to this method is that the robot

  10. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    DTIC Science & Technology

    1986-05-01

    consistency (or strong consistency) of maximum likelihood estimator has been studied by many researchers, for example, Wald (1949), Wolfowitz (1953, 1965...20, 595-601. [25] Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119. [26...Probability Letters Vol. 1, No. 3, 197-202. [24] Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol

  11. Simulating protein dynamics: Novel methods and applications

    NASA Astrophysics Data System (ADS)

    Vishal, V.

    This Ph.D. dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long-timescale behavior from an ensemble of short simulations. Advanced computing architectures like graphics processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's disease and fundamental questions in protein dynamics are described.

  12. Automatic T2* determination for quantification of iron load in heart and liver: a comparison between automatic inline Maximum Likelihood Estimate and the truncation and offset methods.

    PubMed

    Hedström, Erik; Voigt, Tobias; Greil, Gerald; Schaeffter, Tobias; Nagel, Eike

    2017-05-01

    To validate iron-load T2* by automatic inline Maximum Likelihood Estimate (MLE) with k-space Rician noise correction, against the manual and automated truncation, as well as offset methods, in phantoms and in heart and liver in patients. Twenty-five patients and an iron-oxide phantom were scanned at 1.5T using 2 multi-echo gradient-echo sequences. All parameters were identical (voxel 2-3 × 2-3 × 10 mm³, 10 echoes, TR = 26 ms, FA = 20°, BW = 833 Hz, SENSE = 2) except for TE (cardiac: TE1 = 2·5 ms, ΔTE = 2·5 ms; liver: TE1 = 1·2 ms, ΔTE = 1·5 ms). Phantoms were scanned at 1 and 32 signal averages (NSA), with NSA32 representing the low-noise reference. Phantoms: MLE showed low variability between NSA1 and NSA32 (0·02 ± 0·29 ms, CI ±0·21 ms). Between methods, no difference was shown (MLE versus all: <0·31 ms, CI < ±0·35 ms). No differences were found between methods in heart (MLE versus all: <-0·22 ms, CI < ±0·75 ms) or liver (MLE versus all: <0·12 ms, CI < ±0·26 ms). The automatic inline MLE method is comparable to the general reference standards for determining cardiac and liver T2* for iron load in man. An automatic inline method may simplify iron-load assessment, particularly in centres seeing fewer cases. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  13. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  14. SWECS tower dynamics analysis methods and results

    NASA Technical Reports Server (NTRS)

    Wright, A. D.; Sexton, J. H.; Butterfield, C. P.; Thresher, R. M.

    1981-01-01

    Several different tower dynamics analysis methods and computer codes were used to determine the natural frequencies and mode shapes of both guyed and freestanding wind turbine towers. These analysis methods are described and the results for two types of towers, a guyed tower and a freestanding tower, are shown. The advantages and disadvantages in the use of and the accuracy of each method are also described.

  15. Application of an advanced maximum likelihood estimation restoration method for enhanced-resolution and contrast in second-harmonic generation microscopy.

    PubMed

    Sivaguru, Mayandi; Kabir, Mohammad M; Gartia, Manas Ranjan; Biggs, David S C; Sivaguru, Barghav S; Sivaguru, Vignesh A; Fried, Glenn A; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C

    2017-09-01

    Second-harmonic generation (SHG) microscopy has gained popularity because of its ability to perform submicron, label-free imaging of noncentrosymmetric biological structures, such as fibrillar collagen in the extracellular matrix environment of various organs with high contrast and specificity. Because SHG is a two-photon coherent scattering process, it is difficult to define a point spread function (PSF) for this modality. Hence, compared to incoherent two-photon processes like two-photon fluorescence, it is challenging to apply the various PSF-engineering methods to improve the spatial resolution to be close to the diffraction limit. Using a synthetic PSF and application of an advanced maximum likelihood estimation (AdvMLE) deconvolution algorithm, we demonstrate restoration of the spatial resolution in SHG images to that closer to the theoretical diffraction limit. The AdvMLE algorithm adaptively and iteratively develops a PSF for the supplied image and succeeds in improving the signal to noise ratio (SNR) for images where the SHG signals are derived from various sources such as collagen in tendon and myosin in heart sarcomere. Approximately 3.5 times improvement in SNR is observed for tissue images at depths of up to ∼480 nm, which helps in revealing the underlying helical structures in collagen fibres with an ∼26% improvement in the amplitude contrast in a fibre pitch. Our approach could be adapted to noisy and low resolution modalities such as micro-nano CT and MRI, impacting precision of diagnosis and treatment of human diseases. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
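    As context, the classical Richardson-Lucy iteration, the textbook maximum-likelihood deconvolution for Poisson noise, is sketched below on a 1-D toy signal with a known Gaussian PSF; the AdvMLE algorithm described above additionally adapts the PSF itself, which this sketch does not attempt:

```python
# Context only: classical Richardson-Lucy (maximum-likelihood) deconvolution for
# Poisson noise, applied to a 1-D toy signal with a known Gaussian PSF.
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(7)
signal = np.zeros(100)
signal[30], signal[60] = 80.0, 120.0                    # two point-like emitters
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()
blurred = rng.poisson(np.convolve(signal, psf, mode="same")).astype(float)
restored = richardson_lucy(blurred, psf)
print("restored peaks near bins:",
      int(np.argmax(restored[:50])), "and", 50 + int(np.argmax(restored[50:])))
```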

  16. Likelihood and clinical trials.

    PubMed

    Hill, G; Forbes, W; Kozak, J; MacNeill, I

    2000-03-01

    The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory.

  17. SPT Lensing Likelihood: South Pole Telescope CMB lensing likelihood code

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Peiris, Hiranya V.; Verde, Licia

    2014-11-01

    The SPT lensing likelihood code, written in Fortran90, evaluates a Gaussian likelihood based on the lensing potential power spectrum, using a file from CAMB (ascl:1102.026) that contains the normalization required to produce the power spectrum the likelihood call expects.
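    The statistical core of such a code is a Gaussian band-power likelihood; the sketch below is schematic, with placeholder data, model and covariance rather than the SPT products or the CAMB normalization file the real Fortran code reads:

```python
# Schematic Gaussian band-power likelihood:
# -2 ln L = (d - m)^T C^{-1} (d - m) + ln det(2*pi*C).
import numpy as np

def gaussian_loglike(data_bandpowers, model_bandpowers, covariance):
    resid = data_bandpowers - model_bandpowers
    chi2 = resid @ np.linalg.solve(covariance, resid)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * covariance)
    return -0.5 * (chi2 + logdet)

d = np.array([1.05, 0.98, 1.02])          # measured lensing band powers (toy)
m = np.ones(3)                            # theory prediction for the same bands
C = 0.01 * np.eye(3)                      # band-power covariance (toy)
print(f"ln L = {gaussian_loglike(d, m, C):.2f}")
```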

  18. Method for monitoring slow dynamics recovery

    NASA Astrophysics Data System (ADS)

    Haller, Kristian C. E.; Hedberg, Claes M.

    2012-11-01

    Slow Dynamics is a specific material property, which for example is connected to the degree of damage. It is therefore of importance to be able to attain proper measurements of it. Usually it has been monitored by acoustic resonance methods which have very high sensitivity as such. However, because the acoustic wave is acting both as conditioner and as probe, the measurement is affecting the result which leads to a mixing of the fast nonlinear response to the excitation and the slow dynamics material recovery. In this article a method is introduced which, for the first time, removes the fast dynamics from the process and allows the behavior of the slow dynamics to be monitored by itself. The new method has the ability to measure at the shortest possible recovery times, and at very small conditioning strains. For the lowest strains the sound speed increases with strain, while at higher strains a linear decreasing dependence is observed. This is the first method and test that has been able to monitor the true material state recovery process.

  19. Solution Methods for Stochastic Dynamic Linear Programs.

    DTIC Science & Technology

    1980-12-01

    Linear Programming, IIASA, Laxenburg, Austria, June 2-6, 1980. [2] Aghili, P., R.H., Cramer and H.W. Thompson, "On the applicability of two-stage...Laxenburg, Austria, May, 1978. [52] Propoi, A. and V. Krivonozhko, "The simplex method for dynamic linear programs", RR-78-14, IIASA, Vienna, Austria

  20. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

  1. Interfacial gauge methods for incompressible fluid dynamics

    PubMed Central

    Saye, Robert

    2016-01-01

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of “gauge freedom” to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567

  2. Interfacial gauge methods for incompressible fluid dynamics.

    PubMed

    Saye, Robert

    2016-06-01

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.

  3. Interfacial gauge methods for incompressible fluid dynamics

    DOE PAGES

    Saye, R.

    2016-06-10

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.

  4. Interfacial gauge methods for incompressible fluid dynamics

    SciTech Connect

    Saye, R.

    2016-06-10

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.

  5. Residence location and likelihood of kidney transplantation

    PubMed Central

    Tonelli, Marcello; Klarenbach, Scott; Manns, Braden; Culleton, Bruce; Hemmelgarn, Brenda; Bertazzon, Stefania; Wiebe, Natasha; Gill, John S.

    2006-01-01

    Background In a universal, public health care system, access to kidney transplantation should not be influenced by residence location. We determined the likelihood of kidney transplantation from deceased donors among Canadian dialysis patients living in 7 geographic regions. Within each region we also determined whether distance from the closest transplant centre was associated with the likelihood of transplantation. Methods A random sample of 7034 subjects initiating dialysis in Canada between 1996 and 2000 was studied. We used Cox proportional hazards models to examine the relation between residence location and the likelihood of kidney transplantation from deceased donors over a median period of 2.4 years. Results There were significant differences in the likelihood of kidney transplantation from deceased donors and predicted waiting times between the different geographic regions. For example, the adjusted relative likelihood of transplantation in Alberta was 3.74 (95% confidence interval [CI] 2.95–4.76) compared with the likelihood in Ontario (p < 0.001). These differences persisted after further adjustment for differences in the rate of deceased organ donation. Within regions, patients who resided 50.1–150 km, 150.1–300 km and more than 300 km from the closest transplant centre had a similar adjusted likelihood of receiving a kidney transplant as those who lived less than 50 km away. Interpretation The adjusted likelihood of undergoing a kidney transplant from a deceased donor varied substantially between geographic regions in Canada. In contrast, the likelihood of transplantation within regions was not affected by distance from the closest transplant centre. PMID:16940265

  6. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    The rapid development of road construction creates demand for fast, high-quality methods of earthwork quality evaluation. Dynamic methods are now adopted in numerous civil engineering sectors. In particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results can be obtained immediately after measurement, and they are thus suitable when construction works must be completed in a short period of time.

  7. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.
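    A minimal sketch of a pairwise composite likelihood for a Gompertz state-space model with sampling error, using bivariate-normal densities of observation pairs at lags 1 and 2 in place of the full joint likelihood; the parameterization and data are illustrative assumptions, not the paper's implementation:

```python
# Pairwise composite likelihood for a Gompertz state-space model: log-abundance
# follows X[t] = a + c*X[t-1] + process noise, and Y[t] = X[t] + sampling noise
# is observed. Pairs at lags 1 and 2 make all four parameters identifiable.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def simulate(n, a, c, sig_proc, sig_obs):
    x = np.empty(n)
    x[0] = a / (1 - c) + rng.normal(0, sig_proc / np.sqrt(1 - c**2))
    for t in range(1, n):
        x[t] = a + c * x[t - 1] + rng.normal(0, sig_proc)
    return x + rng.normal(0, sig_obs, n)

def neg_composite_loglik(params, y):
    a, c, log_sp, log_so = params
    if not -0.99 < c < 0.99:
        return np.inf
    vx = np.exp(2 * log_sp) / (1 - c**2)              # stationary Var(X)
    vy = vx + np.exp(2 * log_so)                      # Var(Y)
    mu = a / (1 - c)
    total = 0.0
    for lag in (1, 2):                                # observation pairs at lags 1, 2
        cov = np.array([[vy, c**lag * vx], [c**lag * vx, vy]])
        pairs = np.column_stack((y[:-lag], y[lag:]))
        total += multivariate_normal.logpdf(pairs, mean=[mu, mu], cov=cov).sum()
    return -total

y = simulate(200, a=1.0, c=0.6, sig_proc=0.3, sig_obs=0.2)
fit = minimize(neg_composite_loglik, x0=[0.5, 0.3, np.log(0.5), np.log(0.5)],
               args=(y,), method="Nelder-Mead")
print(f"estimated density-dependence parameter c: {fit.x[1]:.2f} (true 0.6)")
```

    In a formulation like this, missing observations simply drop the pairs that involve them, which illustrates why composite likelihood accommodates gaps in ecological time series so easily.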

  8. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)

  9. The Sherpa Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
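    The statistical core of such a test can be sketched as a Poisson likelihood ratio between a background-only and a background-plus-source hypothesis for the counts in a candidate region; the PSF-convolved Gaussian source model and the simultaneous fit over stacked observations used by the real tool are omitted here:

```python
# Schematic Poisson likelihood-ratio test for a candidate source region.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

counts_in_region = 9                      # counts observed in the candidate region
background_rate = 3.2                     # expected background counts there,
                                          # taken from a fit to background regions

ll_bkg_only = poisson.logpmf(counts_in_region, background_rate)

def neg_ll_with_source(source_counts):
    return -poisson.logpmf(counts_in_region, background_rate + source_counts)

best = minimize_scalar(neg_ll_with_source, bounds=(0.0, 100.0), method="bounded")
ts = 2.0 * (-best.fun - ll_bkg_only)      # likelihood-ratio test statistic
print(f"best-fit source counts: {best.x:.1f}, TS = {ts:.1f}")
```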

  10. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesized in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  12. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.

  13. A Modified Generalized Likelihood Uncertainty Estimation (GLUE) Methodology: Considering the Subjectivity of Likelihood Function Selection

    NASA Astrophysics Data System (ADS)

    Liu, H.; Zhang, Y.; Houseworth, J.

    2008-12-01

    The Generalized Likelihood Uncertainty Estimation (GLUE) methodology has been widely used in many areas as an effective and general strategy for model calibration and uncertainty estimation associated with complex models. A formal definition of a likelihood function (measure) is needed in the GLUE methodology. However, it has been recognized that the choice of a likelihood measure is inherently subjective, which, in turn, introduces a new kind of uncertainty: the epistemic uncertainty in the GLUE methodology. In this study, we developed a practical framework to address this uncertainty. To apply the GLUE methodology, we propose that multiple likelihood functions be used and the results combined based on probability theory. Through an analysis of the probabilities of four infiltration maps at Yucca Mountain, Nevada, we demonstrate that (1) it is important to consider the uncertainty caused by the subjectivity of the likelihood selection in the GLUE application; and (2) the proposed method can effectively address this epistemic uncertainty.
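
    A minimal Python sketch of the combination idea, not the authors' implementation: several informal likelihood measures are evaluated over Monte Carlo parameter samples, each measure is normalized, and the results are averaged. The toy model, the synthetic data, the two measures and the equal weights are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def toy_model(theta, x):
          # hypothetical one-parameter "model" standing in for an infiltration model
          return theta * x

      x = np.linspace(0.0, 1.0, 20)
      obs = toy_model(2.0, x) + rng.normal(0.0, 0.1, x.size)

      # behavioral parameter samples (Monte Carlo, as in GLUE)
      thetas = rng.uniform(0.0, 4.0, 5000)

      def L_nse(sim, obs):
          # likelihood measure 1: Nash-Sutcliffe efficiency, truncated at zero
          nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
          return max(nse, 0.0)

      def L_inv_var(sim, obs):
          # likelihood measure 2: inverse error variance
          return 1.0 / np.var(sim - obs)

      measures = (L_nse, L_inv_var)
      weights = np.ones(len(measures)) / len(measures)   # assumed equal weights

      post = np.zeros(thetas.size)
      for w, L in zip(weights, measures):
          raw = np.array([L(toy_model(t, x), obs) for t in thetas])
          post += w * raw / raw.sum()        # normalize each measure, then average

      post /= post.sum()
      print("posterior mean of theta:", np.sum(post * thetas))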

  14. Quasi-likelihood for Spatial Point Processes

    PubMed Central

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    2014-01-01

    Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970

  15. Disequilibrium mapping: Composite likelihood for pairwise disequilibrium

    SciTech Connect

    Devlin, B.; Roeder, K.; Risch, N.

    1996-08-15

    The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a Composite Likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction between the disease locus and an arbitrary marker locus, θ; the age of the mutation; and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
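
    The "maximize the composite likelihood over a grid of candidate locations" idea can be sketched as below. This is a deliberately simplified stand-in, assuming a toy exponential-decay model for expected pairwise disequilibrium and Gaussian working terms; it is not the paper's three-parameter CL model, and the marker positions and disequilibrium values are invented.

      import numpy as np

      # marker positions (cM) and observed pairwise disequilibrium values (hypothetical)
      pos = np.array([0.0, 0.2, 0.5, 0.9, 1.4, 2.0])
      d_obs = np.array([0.15, 0.30, 0.45, 0.28, 0.12, 0.05])

      def composite_loglik(x0, tau, sigma):
          """Toy CL: expected disequilibrium decays exponentially with distance
          from the putative disease locus x0; per-pair Gaussian working terms
          are simply summed (pairs treated as if independent)."""
          expected = 0.5 * np.exp(-tau * np.abs(pos - x0))
          return np.sum(-0.5 * ((d_obs - expected) / sigma) ** 2)

      grid = np.linspace(0.0, 2.0, 201)
      cl = [max(composite_loglik(x0, tau, 0.05)
                for tau in np.linspace(0.5, 5.0, 30)) for x0 in grid]
      print("CL-maximizing candidate position:", grid[int(np.argmax(cl))])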

  16. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
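
    As a concrete illustration of the B-spline machinery (not taken from the paper), the following sketch evaluates B-spline basis functions with the Cox-de Boor recursion on a clamped knot vector and checks the partition-of-unity property; the degree and knot vector are arbitrary choices.

      import numpy as np

      def bspline_basis(i, p, knots, x):
          """Value of the i-th B-spline basis function of degree p at x
          (Cox-de Boor recursion)."""
          if p == 0:
              return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
          left = 0.0
          if knots[i + p] > knots[i]:
              left = (x - knots[i]) / (knots[i + p] - knots[i]) * \
                     bspline_basis(i, p - 1, knots, x)
          right = 0.0
          if knots[i + p + 1] > knots[i + 1]:
              right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * \
                      bspline_basis(i + 1, p - 1, knots, x)
          return left + right

      # cubic B-splines on a clamped (open) knot vector over [0, 1]
      p = 3
      knots = np.r_[np.zeros(p), np.linspace(0.0, 1.0, 6), np.ones(p)]
      n = len(knots) - p - 1
      x = 0.37
      vals = [bspline_basis(i, p, knots, x) for i in range(n)]
      print(sum(vals))   # partition of unity: the basis functions sum to 1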

  17. Comparing Methods for Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Lai, Chok Fung

    2011-01-01

    This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with twice today's traffic demand. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to reduce the delay from their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.

  18. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  19. Meshless methods for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Katz, Aaron Jon

    While the generation of meshes has always posed challenges for computational scientists, the problem has become more acute in recent years. Increased computational power has enabled scientists to tackle problems of increasing size and complexity. While algorithms have seen great advances, mesh generation has lagged behind, creating a computational bottleneck. For industry and government looking to impact current and future products with simulation technology, mesh generation imposes great challenges. Many generation procedures often lack automation, requiring many man-hours, which are becoming far more expensive than computer hardware. More automated methods are less reliable for complex geometry with sharp corners, concavity, or otherwise complex features. Most mesh generation methods to date require a great deal of user expertise to obtain accurate simulation results. Since the application of computational methods to real world problems appears to be paced by mesh generation, alleviating this bottleneck potentially impacts an enormous field of problems. The application of meshless methods to computational fluid dynamics is a relatively new area of research designed to help alleviate the burden of mesh generation. Despite their recent inception, there exists no shortage of formulations and algorithms for meshless schemes in the literature. A brief survey of the field reveals varied approaches arising from diverse mathematical backgrounds applied to a wide variety of applications. All meshless schemes attempt to bypass the use of a conventional mesh entirely or in part by discretizing governing partial differential equations on scattered clouds of points. A goal of the present thesis is to develop a meshless scheme for computational fluid dynamics and evaluate its performance compared with conventional methods. The meshless schemes developed in this work compare favorably with conventional finite volume methods in terms of accuracy and efficiency for the Euler and Navier-Stokes equations.
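
    One building block common to many meshless schemes is the estimation of derivatives on a scattered cloud of points. The sketch below is illustrative only and not the particular scheme developed in the thesis: it fits a weighted least-squares linear model to neighbor differences to approximate a gradient.

      import numpy as np

      rng = np.random.default_rng(1)
      pts = rng.uniform(0.0, 1.0, (200, 2))          # scattered point cloud
      f = np.sin(pts[:, 0]) + pts[:, 1] ** 2          # sample field

      def ls_gradient(i, pts, f, n_neigh=12):
          """Estimate grad f at point i from its nearest neighbors by
          weighted least squares on  f_j - f_i ~ grad_f . (x_j - x_i)."""
          d = np.linalg.norm(pts - pts[i], axis=1)
          neigh = np.argsort(d)[1:n_neigh + 1]
          dx = pts[neigh] - pts[i]
          df = f[neigh] - f[i]
          w = 1.0 / d[neigh]                           # inverse-distance weights
          A = dx * w[:, None]
          b = df * w
          grad, *_ = np.linalg.lstsq(A, b, rcond=None)
          return grad

      # compare with the exact gradient (cos x, 2y) at the first cloud point
      print(ls_gradient(0, pts, f), np.cos(pts[0, 0]), 2 * pts[0, 1])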

  20. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
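
    A minimal illustration of the two ingredients discussed, on an arbitrary scalar stiff ODE rather than a dislocation dynamics system: an implicit time step (the standard second-order trapezoidal rule used as the baseline in the paper) whose nonlinear equation is resolved with Newton's method.

      import numpy as np

      def f(y):
          # stiff test problem  y' = -50 (y - cos(y))  (arbitrary choice)
          return -50.0 * (y - np.cos(y))

      def dfdy(y):
          return -50.0 * (1.0 + np.sin(y))

      def trapezoidal_step(y0, dt, newton_iters=10, tol=1e-12):
          """One implicit trapezoidal step solved with Newton's method on
          g(y) = y - y0 - dt/2 (f(y0) + f(y)) = 0."""
          y = y0 + dt * f(y0)                      # explicit Euler predictor
          for _ in range(newton_iters):
              g = y - y0 - 0.5 * dt * (f(y0) + f(y))
              dg = 1.0 - 0.5 * dt * dfdy(y)
              dy = -g / dg
              y += dy
              if abs(dy) < tol:
                  break
          return y

      y, dt = 1.0, 0.05
      for _ in range(40):
          y = trapezoidal_step(y, dt)
      print(y)     # approaches the fixed point y = cos(y), about 0.739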

  1. Implicit integration methods for dislocation dynamics

    SciTech Connect

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.

  2. Implicit integration methods for dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.

    2015-03-01

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.

  3. Spacecraft Dynamic Characterization by Strain Energies Method

    NASA Astrophysics Data System (ADS)

    Bretagne, J.-M.; Fragnito, M.; Massier, S.

    2002-01-01

    In recent years the significant increase in demand for satellite broadcasting, with the dawn of wide-band communication, has given a great boost to the telecommunication satellite market. Operators (such as SES/Astra, Eutelsat, Intelsat, Inmarsat, EuroSkyWay, etc.) have translated this demand into an increase in orders of telecom satellites to manufacturers worldwide. The largest part of these telecom satellite orders consists of geostationary platforms, which grow ever larger in mass (over 5 tons) due to ever longer required lifetimes (up to 20 years), and become more complex due to the need to implement an ever larger number of repeaters, antenna reflectors, feeds, etc. In this context, the mechanical design and verification of these large spacecraft become difficult and ambitious at the same time, driven by the dry mass limitation objective. With the Finite Element Method (FEM), and on the basis of the telecom satellite heritage of a world-leading manufacturer such as Alcatel Space Industries, it is nowadays possible to model these spacecraft in a realistic and confident way in order to identify the main global dynamic aspects such as mode shapes, mass participation and/or dynamic responses. On the other hand, one of the main aims is to identify early in a program the most critical aspects of the system behavior in the launch dynamic environment, such as possible dynamic coupling between the different subsystems and secondary structures of the spacecraft (large deployable reflectors, thrusters, etc.). To this aim a numerical method has been developed in the frame of the Alcatel SPACEBUS family program, using MSC/Nastran capabilities, and it is presented in this paper. The method is based on spacecraft sub-structuring and strain energy calculation, and mainly consists of two steps: 1) subsystem modal strain energy ratio (with respect to the global strain energy); 2) subsystem strain energy calculation for each mode according to the base driven
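
    The modal strain-energy bookkeeping of step 1 can be sketched in a few lines: if the global stiffness matrix is assembled as a sum of subsystem contributions, the fraction of strain energy carried by a subsystem in mode i is phi_i^T K_sub phi_i divided by phi_i^T K phi_i. The matrices below are random symmetric positive definite stand-ins, not a spacecraft model, and the two-subsystem split is an assumption made for illustration.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(2)
      n = 12

      def random_spd(n):
          a = rng.normal(size=(n, n))
          return a @ a.T + n * np.eye(n)

      # global stiffness assembled from two subsystem contributions (K = K1 + K2)
      K1, K2 = random_spd(n), random_spd(n)
      K = K1 + K2
      M = random_spd(n)                      # mass matrix

      # real eigenmodes of the undamped system  K phi = w^2 M phi
      w2, Phi = eigh(K, M)

      for i in range(4):                     # first few modes
          phi = Phi[:, i]
          e_tot = phi @ K @ phi
          ratio = phi @ K1 @ phi / e_tot     # subsystem-1 modal strain energy ratio
          print(f"mode {i}: f = {np.sqrt(w2[i]) / (2 * np.pi):.2f} Hz, "
                f"subsystem-1 energy fraction = {ratio:.2f}")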

  4. New methods for quantum mechanical reaction dynamics

    SciTech Connect

    Thompson, Ward Hugh

    1996-12-01

    Quantum mechanical methods are developed to describe the dynamics of bimolecular chemical reactions. We focus on developing approaches for directly calculating the desired quantity of interest. Methods for the calculation of single matrix elements of the scattering matrix (S-matrix) and initial state-selected reaction probabilities are presented. This is accomplished by the use of absorbing boundary conditions (ABC) to obtain a localized (L2) representation of the outgoing wave scattering Green's function. This approach enables the efficient calculation of only a single column of the S-matrix with a proportionate savings in effort over the calculation of the entire S-matrix. Applying this method to the calculation of the initial (or final) state-selected reaction probability, a more averaged quantity, requires even less effort than the state-to-state S-matrix elements. It is shown how the same representation of the Green's function can be effectively applied to the calculation of negative ion photodetachment intensities. Photodetachment spectroscopy of the anion ABC- can be a very useful method for obtaining detailed information about the neutral ABC potential energy surface, particularly if the ABC- geometry is similar to the transition state of the neutral ABC. Total and arrangement-selected photodetachment spectra are calculated for the H3O- system, providing information about the potential energy surface for the OH + H2 reaction when compared with experimental results. Finally, we present methods for the direct calculation of the thermal rate constant from the flux-position and flux-flux correlation functions. The spirit of transition state theory is invoked by concentrating on the short time dynamics in the area around the transition state that determine reactivity. These methods are made efficient by evaluating the required quantum mechanical trace in the basis of eigenstates of the

  5. Optimization of dynamic systems using collocation methods

    NASA Astrophysics Data System (ADS)

    Holden, Michael Eric

    The time-based simulation is an important tool for the engineer. Often a time-domain simulation is the most expedient to construct, the most capable of handling complex modeling issues, or the most understandable with an engineer's physical intuition. Aeroelastic systems, for example, are often most easily solved with a nonlinear time-based approach to allow the use of high fidelity models. Simulations of automatic flight control systems can also be easier to model in the time domain, especially when nonlinearities are present. Collocation is an optimization method for systems that incorporate a time-domain simulation. Instead of integrating the equations of motion for each design iteration, the optimizer iteratively solves the simulation as it finds the optimal design. This forms a smooth, well-posed, sparse optimization problem, transforming the numerical integration's sequential calculation into a set of constraints that can be evaluated in any order, or even in parallel. The collocation method used in this thesis has been improved from existing techniques in several ways, in particular with a very simple and computationally inexpensive method of applying dynamic constraints, such as damping, that are more traditionally calculated with linear models in the frequency domain. This thesis applies the collocation method to a range of aircraft design problems, from minimizing the weight of a wing with a flutter constraint, to gain-scheduling the stability augmentation system of a small-scale flight control testbed, to aeroservoelastic design of a large aircraft concept. Collocation methods have not been applied to aeroelastic simulations in the past, although the combination of nonlinear aerodynamic analyses with structural dynamics and stability constraints is well-suited to collocation. The results prove the collocation method's worth as a tool for aircraft design, particularly when applied to the multidisciplinary numerical models used today.
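
    A compact sketch of direct collocation in the spirit described: states and controls become decision variables, and the time integration is replaced by trapezoidal "defect" equality constraints handed to an NLP solver. The double-integrator minimum-effort problem and the SLSQP solver are illustrative choices, not the thesis's aeroelastic formulation.

      import numpy as np
      from scipy.optimize import minimize

      N, T = 20, 1.0                     # collocation nodes, final time
      dt = T / (N - 1)

      def unpack(z):
          x = z[:2 * N].reshape(2, N).T  # states: columns (position, velocity)
          u = z[2 * N:]                  # controls
          return x, u

      def dyn(x, u):                     # double integrator  x' = (v, u)
          return np.column_stack([x[:, 1], u])

      def defects(z):
          """Trapezoidal collocation constraints replacing time integration."""
          x, u = unpack(z)
          f = dyn(x, u)
          d = x[1:] - x[:-1] - 0.5 * dt * (f[1:] + f[:-1])
          return d.ravel()

      def boundary(z):
          x, _ = unpack(z)
          # start at rest at 0, end at rest at 1
          return np.array([x[0, 0], x[0, 1], x[-1, 0] - 1.0, x[-1, 1]])

      def cost(z):                       # minimize control effort
          _, u = unpack(z)
          return dt * np.sum(u ** 2)

      z0 = np.concatenate([np.linspace(0, 1, N), np.zeros(N), np.zeros(N)])
      res = minimize(cost, z0, method="SLSQP",
                     constraints=[{"type": "eq", "fun": defects},
                                  {"type": "eq", "fun": boundary}])
      x_opt, u_opt = unpack(res.x)
      print(res.success, x_opt[-1])      # final state should be close to (1, 0)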

  6. Schwarz method for earthquake source dynamics

    SciTech Connect

    Badea, Lori Ionescu, Ioan R. Wolf, Sylvie

    2008-04-01

    Dynamic faulting under slip-dependent friction in a linear elastic domain (in-plane and 3D configurations) is considered. The use of an implicit time-stepping scheme (Newmark method) allows much larger values of the time step than the critical CFL time step, and higher accuracy to handle the non-smoothness of the interface constitutive law (slip weakening friction). The finite element form of the quasi-variational inequality is solved by a Schwarz domain decomposition method, by separating the inner nodes of the domain from the nodes on the fault. In this way, the quasi-variational inequality splits into two subproblems. The first one is a large linear system of equations, and its unknowns are related to the mesh nodes of the first subdomain (i.e. lying inside the domain). The unknowns of the second subproblem are the degrees of freedom of the mesh nodes of the second subdomain (i.e. lying on the domain boundary where the conditions of contact and friction are imposed). This nonlinear subproblem is solved by the same Schwarz algorithm, leading to some local nonlinear subproblems of a very small size. Numerical experiments are performed to illustrate convergence in time and space, instability capturing, energy dissipation and the influence of normal stress variations. We have used the proposed numerical method to compute source dynamics phenomena on complex and realistic 2D fault models (branched fault systems)
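
    The Schwarz splitting can be illustrated in its most elementary form, far from the frictional-contact setting of the paper: a classical alternating Schwarz iteration for a 1D Poisson problem on two overlapping subdomains, with each subdomain solve using the latest values from the other subdomain as boundary data.

      import numpy as np

      # -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact solution u = x(1 - x)/2
      n = 101
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      u = np.zeros(n)

      def solve_subdomain(u, lo, hi):
          """Dirichlet solve of -u'' = 1 on grid points lo..hi (inclusive),
          using current values of u at lo and hi as boundary data."""
          m = hi - lo - 1
          A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h ** 2
          b = np.ones(m)
          b[0] += u[lo] / h ** 2
          b[-1] += u[hi] / h ** 2
          u[lo + 1:hi] = np.linalg.solve(A, b)

      # two overlapping subdomains: [0, 0.6] and [0.4, 1]
      i1, i2 = 60, 40
      for _ in range(20):                      # alternating Schwarz sweeps
          solve_subdomain(u, 0, i1)
          solve_subdomain(u, i2, n - 1)

      # error decays geometrically with the number of sweeps
      print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))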

  7. Dynamic data filtering system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-04-29

    A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.

  8. On methods for studying stochastic disease dynamics

    PubMed Central

    Keeling, M.J; Ross, J.V

    2007-01-01

    Models that deal with the individual level of populations have shown the importance of stochasticity in ecology, epidemiology and evolution. An increasingly common approach to studying these models is through stochastic (event-driven) simulation. One striking disadvantage of this approach is the need for a large number of replicates to determine the range of expected behaviour. Here, for a class of stochastic models called Markov processes, we present results that overcome this difficulty and provide valuable insights, but which have been largely ignored by applied researchers. For these models, the so-called Kolmogorov forward equation (also called the ensemble or master equation) allows one to simultaneously consider the probability of each possible state occurring. Irrespective of the complexities and nonlinearities of population dynamics, this equation is linear and has a natural matrix formulation that provides many analytical insights into the behaviour of stochastic populations and allows rapid evaluation of process dynamics. Here, using epidemiological models as a template, these ensemble equations are explored and results are compared with traditional stochastic simulations. In addition, we describe further advantages of the matrix formulation of dynamics, providing simple exact methods for evaluating expected eradication (extinction) times of diseases, for comparing expected total costs of possible control programmes and for estimation of disease parameters. PMID:17638650
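
    A short sketch of the matrix formulation described, using a stochastic SIS epidemic as the template: build the Markov-chain generator over the states I = 0..N, evolve the probability vector with the Kolmogorov forward equation, and obtain expected extinction times exactly from a single linear solve over the transient states. Parameter values are arbitrary.

      import numpy as np
      from scipy.linalg import expm

      N, beta, gamma = 50, 1.5, 1.0          # population size, infection, recovery

      # generator matrix Q over states I = 0..N (I = 0 is absorbing)
      Q = np.zeros((N + 1, N + 1))
      for i in range(1, N + 1):
          up = beta * i * (N - i) / N        # I -> I + 1
          down = gamma * i                   # I -> I - 1
          if i < N:
              Q[i, i + 1] = up
          Q[i, i - 1] = down
          Q[i, i] = -(up + down)

      # Kolmogorov forward (master) equation: p(t) = p(0) expm(Q t)
      p0 = np.zeros(N + 1)
      p0[1] = 1.0                            # one initial infective
      p10 = p0 @ expm(Q * 10.0)
      print("P(extinct by t = 10):", p10[0])

      # expected extinction time from each transient state: solve Q_T tau = -1
      QT = Q[1:, 1:]
      tau = np.linalg.solve(QT, -np.ones(N))
      print("expected extinction time from I = 1:", tau[0])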

  9. On methods for studying stochastic disease dynamics.

    PubMed

    Keeling, M J; Ross, J V

    2008-02-06

    Models that deal with the individual level of populations have shown the importance of stochasticity in ecology, epidemiology and evolution. An increasingly common approach to studying these models is through stochastic (event-driven) simulation. One striking disadvantage of this approach is the need for a large number of replicates to determine the range of expected behaviour. Here, for a class of stochastic models called Markov processes, we present results that overcome this difficulty and provide valuable insights, but which have been largely ignored by applied researchers. For these models, the so-called Kolmogorov forward equation (also called the ensemble or master equation) allows one to simultaneously consider the probability of each possible state occurring. Irrespective of the complexities and nonlinearities of population dynamics, this equation is linear and has a natural matrix formulation that provides many analytical insights into the behaviour of stochastic populations and allows rapid evaluation of process dynamics. Here, using epidemiological models as a template, these ensemble equations are explored and results are compared with traditional stochastic simulations. In addition, we describe further advantages of the matrix formulation of dynamics, providing simple exact methods for evaluating expected eradication (extinction) times of diseases, for comparing expected total costs of possible control programmes and for estimation of disease parameters.

  10. Direct anharmonic correction method by molecular dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang

    2017-04-01

    The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculations of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we proposed a direct free energy interpolation (DFEI) method based on the temperature dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. The extensive tests on other materials including metals, alloys, semiconductors and insulators also manifest that the DFEI method can easily uncover the remaining anharmonicity that the quasi-harmonic approximation (QHA) omits. It is thus evidenced that the DFEI method is indeed a very efficient method used to conduct anharmonic effect corrections beyond QHA. More importantly, it is much more straightforward and easier compared to previous anharmonic methods.
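
    The anharmonic correction itself is specific to the paper, but the underlying step of turning a phonon density of states into a vibrational free energy, F_vib(T) = integral of g(w)[hbar w / 2 + kT ln(1 - exp(-hbar w / kT))] dw, is standard. The sketch below evaluates it for a toy Debye-like DOS and interpolates the result over temperature; it is not the DFEI implementation, and the DOS is an assumption made for illustration.

      import numpy as np
      from scipy.constants import hbar, k as kB
      from scipy.integrate import trapezoid
      from scipy.interpolate import CubicSpline

      # toy Debye-like phonon DOS g(w) ~ w^2 up to a cutoff, normalized to 3 modes per atom
      w = np.linspace(1e10, 5e13, 2000)          # angular frequencies (rad/s)
      g = w ** 2
      g *= 3.0 / trapezoid(g, w)

      def f_vib(T):
          """Vibrational free energy per atom (J) from the phonon DOS."""
          x = hbar * w / (kB * T)
          integrand = g * (0.5 * hbar * w + kB * T * np.log1p(-np.exp(-x)))
          return trapezoid(integrand, w)

      temps = np.linspace(100.0, 1500.0, 15)
      spline = CubicSpline(temps, [f_vib(T) for T in temps])   # free-energy interpolation
      print(spline(300.0), f_vib(300.0))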

  11. Concurrent DSMC Method Using Dynamic Domain Decomposition

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.

    2003-05-01

    In the current study, a parallel two-dimensional direct simulation Monte Carlo method is reported, which incorporates a multi-level graph-partitioning technique to dynamically decompose the computational domain. The current DSMC method is implemented on an unstructured mesh using a particle ray-tracing technique, which takes advantage of the cell connectivity information. The standard Message Passing Interface (MPI) is used to communicate data between processors. In addition, different strategies applying the Stop at Rise (SAR) [7] scheme are utilized to determine when to adapt the workload distribution among processors. Corresponding analysis of parallel performance is reported using the results of a high-speed driven cavity flow on IBM-SP2 parallel machines (memory-distributed, CPU 160 MHz, RAM 256 MB each) up to 64 processors. Small, medium and large problems, based on the number of particles and cells, are simulated. Results, applying the SAR scheme every two time steps, show that parallel efficiency is 57%, 90% and 107% for small, medium and large problems, respectively, at 64 processors. In general, benefits of applying the SAR scheme at larger periods decrease gradually with increasing problem size. Detailed time analysis shows that the degree of imbalance levels off very rapidly at a relatively low value (30%-40%) with increasing number of processors when dynamic load balancing is applied, whereas without dynamic load balancing it increases with the number of processors to a value 5-6 times larger. At the end, the completed code is applied to compute a near-continuum gas flow to demonstrate its superior computational capability.

  12. Semiclassical methods of non-adiabatic dynamics

    NASA Astrophysics Data System (ADS)

    Bonella, Sara

    A new method to study how the combined effect of electronic transitions and nuclear motion (non-adiabatic dynamics) influences the properties of molecular systems is presented in this dissertation. Building on previous ideas, a computer simulation technique that combines numerical efficiency with an accurate representation of the quantum aspects of the non-adiabatic evolution is developed. The new method, called Focusing, is tested against exact quantum calculations and standard approximate techniques currently in use in a series of calculations on benchmark problems of growing complexity. Focusing is consistently able to reproduce the exact result while reducing by two orders of magnitude the numerical effort necessary to achieve similar accuracy with the alternative methods. The improved efficiency is achieved by combining two well-controlled approximations. First, a theoretical analysis is performed to represent the quantum evolution, usually too complex to be calculated exactly, in terms of an ensemble of classical trajectories that can be obtained with well established algorithms. The resulting new semiclassical propagator, though closely related to known results, overcomes a serious limitation which prevented the application of this earlier work to realistic molecular systems. Secondly, the efficiency of the semiclassical simulation is optimized by taking advantage of the mathematical structure of the non-adiabatic propagator and through a steepest descent analysis identifying a priori the subset of trajectories that produce the most important contributions to the dynamics. A variation of the new method, called Refocusing, is shown to improve the situation even further for an interesting subset of non-adiabatic problems in which nuclear evolution takes place on different electronic surfaces which are very dissimilar from one another. The quality of the agreement and the gain in efficiency are maintained when studying the non-adiabatic dynamics of a

  13. Growing local likelihood network: Emergence of communities

    NASA Astrophysics Data System (ADS)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  14. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
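
    A toy version of the output-error cost described: simulate the model response for candidate parameters, form the log-likelihood from the residuals against the measured response, and minimize it. The first-order model and synthetic "measurements" below are illustrative assumptions, not flight data or the Langley implementation.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      dt, n = 0.05, 200
      t = np.arange(n) * dt
      u = np.sin(0.5 * t)                        # control input time history

      def simulate(theta):
          """First-order model x' = a x + b u, a stand-in for the aircraft
          equations of motion used by the estimator."""
          a, b = theta
          x = np.zeros(n)
          for k in range(n - 1):
              x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
          return x

      truth = np.array([-1.2, 0.8])
      z = simulate(truth) + rng.normal(0.0, 0.01, n)    # "measured" response

      def neg_log_likelihood(theta):
          r = z - simulate(theta)
          R = np.var(r)                       # innovation variance estimate
          return 0.5 * (n * np.log(R) + r @ r / R)

      fit = minimize(neg_log_likelihood, x0=np.array([-0.5, 0.5]), method="Nelder-Mead")
      print(fit.x)          # should be close to (-1.2, 0.8)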

  15. Modelling thrombosis using dissipative particle dynamics method

    PubMed Central

    Filipovic, N; Kojic, M; Tsuda, A

    2008-01-01

    Aim. Arterial occlusion is a leading cause of cardiovascular disease. The main mechanism causing vessel occlusion is thrombus formation, which may be initiated by the activation of platelets. The focus of this study is on the mechanical aspects of platelet-mediated thrombosis, which includes the motion, collision, adhesion and aggregation of activated platelets in the blood. A review of the existing continuum-based models is given. A mechanical model of platelet accumulation onto the vessel wall is developed using the dissipative particle dynamics (DPD) method in which the blood (i.e. colloidal-composed medium) is treated as a group of mesoscale particles interacting through conservative, dissipative, attractive and random forces. Methods. Colloidal fluid components (plasma and platelets) are discretized by mesoscopic (micrometre-size) particles that move according to Newton's law. The size of each mesoscopic particle is small enough to allow tracking of each constituent of the colloidal fluid, but significantly larger than the size of atoms such that, in contrast to the molecular dynamics approach, detailed atomic level analysis is not required. Results. To test this model, we simulated the deposition of platelets onto the wall of an expanded tube and compared our computed results with the experimental data of Karino et al. (Microvasc. Res. 17, 238–269, 1977). By matching our simulations to the experimental results, the platelet aggregation/adhesion binding force (characterized by an effective spring constant) was determined and found to be within a physiologically reasonable range. Conclusion. Our results suggest that the DPD method offers a promising new approach to the modelling of platelet-mediated thrombosis. The DPD model includes interaction forces between platelets both when they are in the resting state (non-activated) and when they are activated, and therefore it can be extended to the analysis of kinetics of binding and other phenomena relevant to
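
    The three DPD force contributions mentioned (conservative, dissipative and random) are shown below for a single particle pair in the standard Groot-Warren form, with the random-force amplitude tied to the dissipative one by the fluctuation-dissipation relation. Parameter values are generic, and the attractive platelet-adhesion term of the paper is not included.

      import numpy as np

      rng = np.random.default_rng(4)
      kT, rc, a, gamma, dt = 1.0, 1.0, 25.0, 4.5, 0.01
      sigma = np.sqrt(2.0 * gamma * kT)          # fluctuation-dissipation relation

      def dpd_pair_force(ri, rj, vi, vj):
          """Conservative + dissipative + random DPD force on particle i from j."""
          rij = ri - rj
          r = np.linalg.norm(rij)
          if r >= rc:
              return np.zeros(3)
          e = rij / r
          w = 1.0 - r / rc                       # weight function w(r)
          fc = a * w * e                                          # conservative (soft repulsion)
          fd = -gamma * w ** 2 * np.dot(e, vi - vj) * e           # dissipative
          fr = sigma * w * rng.standard_normal() * e / np.sqrt(dt)  # random
          return fc + fd + fr

      ri, rj = np.zeros(3), np.array([0.6, 0.0, 0.0])
      vi, vj = np.array([0.1, 0.0, 0.0]), np.zeros(3)
      print(dpd_pair_force(ri, rj, vi, vj))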

  16. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    PubMed Central

    Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GГD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
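
    The heart of the local clustering step is a distance that mixes intensity similarity with spatial proximity, searched only in a window around each cluster centre. The sketch below performs one such assignment sweep on a synthetic image, using a plain absolute-intensity term in place of the paper's GГD likelihood term; the image, grid step S and compactness weight m are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)
      H, W, S, m = 60, 60, 15, 10.0          # image size, grid step, compactness
      img = rng.gamma(shape=4.0, scale=1.0, size=(H, W))   # synthetic "SAR-like" image

      # initial cluster centres on a regular grid: (row, col, intensity)
      centres = [(r, c, img[r, c]) for r in range(S // 2, H, S)
                                    for c in range(S // 2, W, S)]

      labels = -np.ones((H, W), dtype=int)
      dist = np.full((H, W), np.inf)

      for k, (cr, cc, ci) in enumerate(centres):
          # only pixels in a 2S x 2S window around the centre are considered (local search)
          r0, r1 = max(cr - S, 0), min(cr + S, H)
          c0, c1 = max(cc - S, 0), min(cc + S, W)
          rr, cc2 = np.mgrid[r0:r1, c0:c1]
          d_int = np.abs(img[r0:r1, c0:c1] - ci)             # intensity similarity
          d_sp = np.hypot(rr - cr, cc2 - cc)                  # spatial proximity
          D = np.hypot(d_int, (d_sp / S) * m)                 # combined SLIC-style distance
          better = D < dist[r0:r1, c0:c1]
          dist[r0:r1, c0:c1] = np.where(better, D, dist[r0:r1, c0:c1])
          labels[r0:r1, c0:c1] = np.where(better, k, labels[r0:r1, c0:c1])

      print(len(centres), "superpixels;", np.unique(labels).size, "labels assigned")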

  17. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution.

    PubMed

    Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-07-18

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GГD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images.

  18. NMR Methods to Study Dynamic Allostery.

    PubMed

    Grutsch, Sarina; Brüschweiler, Sven; Tollinger, Martin

    2016-03-01

    Nuclear magnetic resonance (NMR) spectroscopy provides a unique toolbox of experimental probes for studying dynamic processes on a wide range of timescales, ranging from picoseconds to milliseconds and beyond. Along with NMR hardware developments, recent methodological advancements have enabled the characterization of allosteric proteins at unprecedented detail, revealing intriguing aspects of allosteric mechanisms and increasing the proportion of the conformational ensemble that can be observed by experiment. Here, we present an overview of NMR spectroscopic methods for characterizing equilibrium fluctuations in free and bound states of allosteric proteins that have been most influential in the field. By combining NMR experimental approaches with molecular simulations, atomistic-level descriptions of the mechanisms by which allosteric phenomena take place are now within reach.

  19. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  20. Methods and systems for combustion dynamics reduction

    DOEpatents

    Kraemer, Gilbert Otto; Varatharajan, Balachandar; Srinivasan, Shiva; Lynch, John Joseph; Yilmaz, Ertan; Kim, Kwanwoo; Lacy, Benjamin; Crothers, Sarah; Singh, Kapil Kumar

    2009-08-25

    Methods and systems for combustion dynamics reduction are provided. A combustion chamber may include a first premixer and a second premixer. Each premixer may include at least one fuel injector, at least one air inlet duct, and at least one vane pack for at least partially mixing the air from the air inlet duct or ducts and fuel from the fuel injector or injectors. Each vane pack may include a plurality of fuel orifices through which at least a portion of the fuel and at least a portion of the air may pass. The vane pack or packs of the first premixer may be positioned at a first axial position and the vane pack or packs of the second premixer may be positioned at a second axial position axially staggered with respect to the first axial position.

  1. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  2. NMR Methods to Study Dynamic Allostery

    PubMed Central

    Grutsch, Sarina; Brüschweiler, Sven; Tollinger, Martin

    2016-01-01

    Nuclear magnetic resonance (NMR) spectroscopy provides a unique toolbox of experimental probes for studying dynamic processes on a wide range of timescales, ranging from picoseconds to milliseconds and beyond. Along with NMR hardware developments, recent methodological advancements have enabled the characterization of allosteric proteins at unprecedented detail, revealing intriguing aspects of allosteric mechanisms and increasing the proportion of the conformational ensemble that can be observed by experiment. Here, we present an overview of NMR spectroscopic methods for characterizing equilibrium fluctuations in free and bound states of allosteric proteins that have been most influential in the field. By combining NMR experimental approaches with molecular simulations, atomistic-level descriptions of the mechanisms by which allosteric phenomena take place are now within reach. PMID:26964042

  3. Likelihood analysis of earthquake focal mechanism distributions

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2015-06-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, thus their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation pattern. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, internal dependence of scores on forecasted angle and random variability of likelihood scores. Here, we propose a simple tentative solution but extensive theoretical and statistical analysis is needed.

  4. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain has many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on individuals followed over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the error variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302 X1 + 0.00215470 X2.

  5. Applications of Langevin and Molecular Dynamics methods

    NASA Astrophysics Data System (ADS)

    Lomdahl, P. S.

    Computer simulation of complex nonlinear and disordered phenomena from materials science is rapidly becoming an active new area, serving as a guide for experiments and for testing of theoretical concepts. This is especially true when novel massively parallel computer systems and techniques are used on these problems. In particular the Langevin dynamics simulation technique has proven useful in situations where the time evolution of a system in contact with a heat bath is to be studied. The traditional way to study systems in contact with a heat bath has been via the Monte Carlo method. While this method has indeed been used successfully in many applications, it has difficulty addressing true dynamical questions. Large systems of coupled stochastic ODE's (or Langevin equations) are commonly the end result of a theoretical description of higher dimensional nonlinear systems in contact with a heat bath. The coupling is often local in nature, because it reflects local interactions formulated on a lattice; the lattice, for example, represents the underlying discreteness of a substrate of atoms or discrete k-values in Fourier space. The fundamental unit of parallelism thus has a direct analog in the physical system the authors are interested in. In these lecture notes the authors illustrate the use of Langevin stochastic simulation techniques on a number of nonlinear problems from materials science and condensed matter physics that have attracted attention in recent years. First, the authors review the idea behind the fluctuation-dissipation theorem which forms the basis for the numerical Langevin stochastic simulation scheme. The authors then show applications of the technique to various problems from condensed matter and materials science.
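
    A minimal Langevin dynamics sketch for a single particle in a harmonic well, with the random-force amplitude fixed by the fluctuation-dissipation theorem and checked against equipartition; this is an illustration of the technique, not one of the lattice models from the lecture notes, and all parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(6)
      m, k, gamma, kT, dt = 1.0, 1.0, 0.5, 1.0, 0.01
      nsteps = 200_000

      x, v = 0.0, 0.0
      samples = np.empty(nsteps)
      noise_amp = np.sqrt(2.0 * gamma * kT / (m * dt))   # fluctuation-dissipation

      # semi-implicit Euler integration of  m dv = (-k x - m gamma v) dt + noise
      for i in range(nsteps):
          f = -k * x - m * gamma * v + m * noise_amp * rng.standard_normal()
          v += dt * f / m
          x += dt * v
          samples[i] = x

      # equipartition check: <x^2> should approach kT / k
      print(np.mean(samples[nsteps // 10:] ** 2), kT / k)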

  6. Factors Influencing Likelihood of Voice Therapy Attendance.

    PubMed

    Misono, Stephanie; Marmor, Schelomo; Roy, Nelson; Mau, Ted; Cohen, Seth M

    2017-03-01

    Objective To identify factors associated with the likelihood of attending voice therapy among patients referred for it in the CHEER (Creating Healthcare Excellence through Education and Research) practice-based research network infrastructure. Study Design Prospectively enrolled cross-sectional study. Setting CHEER network of community and academic sites. Methods Data were collected on patient-reported demographics, voice-related diagnoses, voice-related handicap (Voice Handicap Index-10), likelihood of attending voice therapy (VT), and opinions on factors influencing likelihood of attending VT. The relationships between patient characteristics/opinions and likelihood of attending VT were investigated. Results A total of 170 patients with various voice-related diagnoses reported receiving a recommendation for VT. Of those, 85% indicated that they were likely to attend it, regardless of voice-related handicap severity. The most common factors influencing likelihood of VT attendance were insurance/copay, relief that it was not cancer, and travel. Those who were not likely to attend VT identified, as important factors, unclear potential improvement, not understanding the purpose of therapy, and concern that it would be too hard. In multivariate analysis, factors associated with greater likelihood of attending VT included shorter travel distance, age (40-59 years), and being seen in an academic practice. Conclusions Most patients reported plans to attend VT as recommended. Patients who intended to attend VT reported different considerations in their decision making from those who did not plan to attend. These findings may inform patient counseling and efforts to increase access to voice care.

  7. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    NASA Astrophysics Data System (ADS)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

    Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for the profile likelihood calculation developed by (Chen and Jennrich 2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe a good accuracy of the method as well as a significant speed up as compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
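
    For orientation, the sketch below shows the standard repeated-optimization profile likelihood that the integration-based method is designed to accelerate: the parameter of interest is fixed on a grid and the remaining (nuisance) parameters are re-optimized at each grid point. The exponential-decay model and synthetic data are stand-ins for a PDE-constrained problem.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 4.0, 40)
      y = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)

      def nll(params):
          amp, rate = params
          r = y - amp * np.exp(-rate * t)
          return 0.5 * np.sum((r / 0.05) ** 2)

      def profile_nll(rate_fixed):
          """Re-optimize the nuisance parameter (amplitude) with the rate fixed."""
          res = minimize(lambda a: nll([a[0], rate_fixed]), x0=[1.0],
                         method="Nelder-Mead")
          return res.fun

      grid = np.linspace(0.4, 1.0, 31)
      profile = np.array([profile_nll(r) for r in grid])
      # approximate 95% confidence interval from the likelihood-ratio threshold
      inside = profile - profile.min() <= 0.5 * 3.84
      print("rate CI approx:", grid[inside][0], "-", grid[inside][-1])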

  8. Semiclassical methods in chemical reaction dynamics

    SciTech Connect

    Keshavamurthy, Srihari

    1994-12-01

    Semiclassical approximations, simple as well as rigorous, are formulated in order to be able to describe gas phase chemical reactions in large systems. We formulate a simple but accurate semiclassical model for incorporating multidimensional tunneling in classical trajectory simulations. This model is based on the existence of locally conserved actions around the saddle point region on a multidimensional potential energy surface. Using classical perturbation theory and monitoring the imaginary action as a function of time along a classical trajectory, we calculate state-specific unimolecular decay rates for a model two dimensional potential with coupling. Results are in good agreement with exact quantum results for the potential over a wide range of coupling constants. We propose a new semiclassical hybrid method to calculate state-to-state S-matrix elements for bimolecular reactive scattering. The accuracy of the Van Vleck-Gutzwiller propagator and the short time dynamics of the system make this method self-consistent and accurate. We also go beyond the stationary phase approximation by doing the resulting integrals exactly (numerically). As a result, classically forbidden probabilities are calculated with purely real time classical trajectories within this approach. Application to the one dimensional Eckart barrier demonstrates the accuracy of this approach. Successful application of the semiclassical hybrid approach to collinear reactive scattering is prevented by the phenomenon of chaotic scattering. The modified Filinov approach to evaluating the integrals is discussed, but application to collinear systems requires a more careful analysis. In three and higher dimensional scattering systems, chaotic scattering is suppressed and hence the accuracy and usefulness of the semiclassical method should be tested for such systems.

  9. Dynamic stiffness method for space frames under distributed harmonic loads

    NASA Astrophysics Data System (ADS)

    Dumir, P. C.; Saha, D. C.; Sengupta, S.

    1992-10-01

    An exact dynamic equivalent load vector for space frames subjected to harmonic distributed loads has been derived using the dynamic stiffness approach. The Taylor series expansion of the dynamic equivalent load vector has revealed that the static consistent equivalent load vector used in a 12-degree-of-freedom two-noded finite element for a space frame is just the first term of the series. The dynamic stiffness approach using the exact dynamic equivalent load vector requires discretization of a member subjected to distributed loads into only one element. The results of the dynamic stiffness method are compared with those of the finite element method for illustrative problems.

  10. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
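
    As an illustration of the kind of exponential cancellation described above (and not the authors' code), the following sketch evaluates a marginal Fisher log-likelihood for inclination-only data in a numerically stable way by using the exponentially scaled Bessel function; the density form and the example data are assumptions made for the illustration only.

        # Hedged sketch: numerically stable log-likelihood for inclination-only data
        # under a marginal Fisher density (standard textbook form; illustration only).
        import numpy as np
        from scipy.special import i0e
        from scipy.optimize import minimize

        def neg_log_likelihood(params, inclinations_rad):
            inc0, kappa = params          # mean inclination (rad) and precision parameter
            if kappa <= 0 or not (-np.pi / 2 < inc0 < np.pi / 2):
                return np.inf
            z = kappa * np.cos(inclinations_rad) * np.cos(inc0)
            # log I0(z) = log(i0e(z)) + |z| cancels the exponential growth,
            # and log(2 sinh k) = k + log(1 - exp(-2k)) avoids overflow.
            log_f = (np.log(kappa) - kappa - np.log1p(-np.exp(-2.0 * kappa))
                     + np.log(np.cos(inclinations_rad))
                     + kappa * np.sin(inclinations_rad) * np.sin(inc0)
                     + np.log(i0e(z)) + np.abs(z))
            return -np.sum(log_f)

        # Example: steep, dispersed inclinations (degrees), made up for illustration.
        incs = np.radians([72.0, 80.0, 65.0, 85.0, 78.0, 70.0])
        fit = minimize(neg_log_likelihood, x0=[np.radians(70.0), 10.0],
                       args=(incs,), method="Nelder-Mead")
        print(np.degrees(fit.x[0]), fit.x[1])   # estimated mean inclination and kappa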

  11. On the likelihood of forests

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  12. Determination of stability and control derivatives from the NASA F/A-18 HARV from flight data using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1995-01-01

    This report is a compilation of PID (parameter identification) results for both longitudinal and lateral-directional analysis that were completed during Fall 1994. It had been established earlier that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was because these maneuvers were designed with the goal of minimizing any lateral-directional motion during longitudinal maneuvers and vice versa, which allows for greater simplification of the aerodynamic model as far as coupling between the longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze these data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, so that more accurate estimates of the classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral-directional flights of static PID maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot and phidot were used. Enclosed with this document is the full set of longitudinal and lateral-directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.

  13. Alternative implementations of Monte Carlo EM algorithms for likelihood inferences

    PubMed Central

    García-Cortés, Louis Alberto; Sorensen, Daniel

    2001-01-01

    Two methods of computing Monte Carlo estimators of variance components using restricted maximum likelihood via the expectation-maximisation algorithm are reviewed. A third approach is suggested and the performance of the methods is compared using simulated data. PMID:11559486

  14. A Dynamic Management Method for Fast Manufacturing Resource Reconfiguration

    NASA Astrophysics Data System (ADS)

    Yuan, Zhiye

    To reconfigure manufacturing resources quickly and optimally, a dynamic management method for fast manufacturing resource reconfiguration based on holons was proposed. In this method, a dynamic management structure for fast manufacturing resource reconfiguration was established based on holons. Moreover, the cooperation relationships among holons for fast manufacturing resource reconfiguration and a holonic manufacturing information cooperation mechanism were constructed. Finally, a simulation system for the dynamic management method for fast manufacturing resource reconfiguration was demonstrated and validated using Flexsim software. The results show that the proposed method can dynamically and optimally reconfigure manufacturing resources and can effectively improve the efficiency of manufacturing processes.

  15. Likelihood reinstates Archaeopteryx as a primitive bird.

    PubMed

    Lee, Michael S Y; Worthy, Trevor H

    2012-04-23

    The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx-deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx-bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics.

  16. System and Method for Dynamic Aeroelastic Control

    NASA Technical Reports Server (NTRS)

    Suh, Peter M. (Inventor)

    2015-01-01

    The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time. The filter is tolerant of asymmetric sensor noise and sensor failures, and supports aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.

  17. Dynamic Programming Method for Impulsive Control Problems

    ERIC Educational Resources Information Center

    Balkew, Teshome Mogessie

    2015-01-01

    In many control systems changes in the dynamics occur unexpectedly or are applied by a controller as needed. The time at which a controller implements changes is not necessarily known a priori. For example, many manufacturing systems and flight operations have complicated control systems, and changes in the control systems may be automatically…

  19. PACO: PArticle COunting Method To Enforce Concentrations in Dynamic Simulations.

    PubMed

    Berti, Claudio; Furini, Simone; Gillespie, Dirk

    2016-03-08

    We present PACO, a computationally efficient method for concentration boundary conditions in nonequilibrium particle simulations. Because it requires only particle counting, its computational effort is significantly smaller than other methods. PACO enables Brownian dynamics simulations of micromolar electrolytes (3 orders of magnitude lower than previously simulated). PACO for Brownian dynamics is integrated in the BROWNIES package (www.phys.rush.edu/BROWNIES). We also introduce a molecular dynamics PACO implementation that allows for very accurate control of concentration gradients.
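
    The abstract does not spell out the algorithm; as a loose illustration of what a particle-counting concentration boundary can look like (assumed scheme: a control slab at each end of the box whose particle count is topped up or thinned each step to match the target concentration, which is not necessarily how PACO itself is implemented), one might write:

        # Loose illustration of a particle-counting concentration boundary for a
        # Brownian-dynamics-like simulation (assumed scheme, not the PACO code).
        import numpy as np

        rng = np.random.default_rng(1)
        BOX = np.array([10.0, 10.0, 50.0])      # nm; z is the transport direction
        BUFFER = 5.0                            # nm-thick control slab at each z end
        TARGET_CONC = 0.01                      # particles per nm^3 in each buffer
        target_count = int(round(TARGET_CONC * BOX[0] * BOX[1] * BUFFER))

        positions = rng.uniform(0.0, 1.0, size=(200, 3)) * BOX

        def enforce_buffer(positions, z_lo, z_hi):
            """Count particles in [z_lo, z_hi) and insert/remove to hit the target."""
            in_buf = (positions[:, 2] >= z_lo) & (positions[:, 2] < z_hi)
            excess = int(in_buf.sum()) - target_count
            if excess > 0:                       # too many: remove random buffer particles
                drop = rng.choice(np.flatnonzero(in_buf), size=excess, replace=False)
                positions = np.delete(positions, drop, axis=0)
            elif excess < 0:                     # too few: insert uniformly in the buffer
                new = rng.uniform(0.0, 1.0, size=(-excess, 3)) * BOX
                new[:, 2] = rng.uniform(z_lo, z_hi, size=-excess)
                positions = np.vstack([positions, new])
            return positions

        # One step: diffuse, then re-impose the boundary concentrations.
        positions += rng.normal(0.0, 0.1, size=positions.shape)
        positions[:, :2] %= BOX[:2]                       # periodic in x and y
        positions = positions[(positions[:, 2] >= 0) & (positions[:, 2] <= BOX[2])]
        positions = enforce_buffer(positions, 0.0, BUFFER)
        positions = enforce_buffer(positions, BOX[2] - BUFFER, BOX[2])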

  20. Factors Associated with Young Adults’ Pregnancy Likelihood

    PubMed Central

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  1. Nonparametric Bayes Factors Based On Empirical Likelihood Ratios

    PubMed Central

    Vexler, Albert; Deng, Wei; Wilding, Gregory E.

    2012-01-01

    Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factors (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes factors type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF’s asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904
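
    For readers unfamiliar with the EL ingredient, the textbook one-sample construction (shown here only to make the empirical likelihood ratio concrete; it is not the proposed Bayes factor procedure) maximizes the product of the n*p_i subject to the mean constraint, which reduces to one-dimensional root finding for a Lagrange multiplier:

        # Empirical likelihood ratio for a one-sample mean (textbook construction,
        # shown only to make the EL ingredient of the Bayes-factor idea concrete).
        import numpy as np
        from scipy.optimize import brentq

        def log_el_ratio(x, mu):
            """log R(mu) = sum log(n*p_i) with p_i = 1 / (n * (1 + lam*(x_i - mu)))."""
            x = np.asarray(x, dtype=float)
            d = x - mu
            if d.min() >= 0 or d.max() <= 0:
                return -np.inf                 # mu outside the convex hull of the data
            def score(lam):                    # stationarity condition for the multiplier
                return np.sum(d / (1.0 + lam * d))
            # 1 + lam*d_i must stay positive, which brackets lam between these limits.
            lo = -1.0 / d.max() + 1e-10
            hi = -1.0 / d.min() - 1e-10
            lam = brentq(score, lo, hi)
            return -np.sum(np.log1p(lam * d))

        x = np.array([1.2, 0.7, 1.9, 1.1, 0.4, 1.6, 0.9])
        print(log_el_ratio(x, mu=1.0))         # 0 at the sample mean, negative elsewhere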

  2. Three dimensional measurement using likelihood function by multi-camera

    NASA Astrophysics Data System (ADS)

    Shimizu, T.

    2017-03-01

    This paper describes three-dimensional shape measurements by a multi-camera system and the correction of the measured points. First, a geometrical model is constructed using the positional relationships of the calibrated cameras. Because the cameras are located linearly on a thin plate whose curvature is changed by an actuator, Zhang's method is employed in the calibration. Second, a three-dimensional object is measured on an epipolar plane. DP (Dynamic Programming) matching is used to determine the corresponding points, with SSD (Sum of Squared Differences) used as the matching cost over local windows. Third, the correlation coefficient is employed to determine the likelihood of the three-dimensional points, which is then used to correct and refine the measured points. Consequently, the measurement results more closely resemble the shape of the measured object.
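
    As a reminder of the SSD ingredient (generic block matching along a rectified epipolar line; the paper's DP matching and likelihood-based correction are not reproduced here), the corresponding point is the candidate whose local window minimizes the sum of squared differences:

        # Generic SSD window matching along an epipolar line (rectified images assumed);
        # the paper additionally enforces ordering with DP matching, omitted here.
        import numpy as np

        def ssd_match(left, right, row, col, half_window=4, max_disp=32):
            """Return the disparity that minimizes SSD for the window centred at (row, col)."""
            win_l = left[row - half_window:row + half_window + 1,
                         col - half_window:col + half_window + 1].astype(float)
            best_d, best_ssd = 0, np.inf
            for d in range(0, max_disp + 1):
                c = col - d
                if c - half_window < 0:
                    break
                win_r = right[row - half_window:row + half_window + 1,
                              c - half_window:c + half_window + 1].astype(float)
                ssd = np.sum((win_l - win_r) ** 2)
                if ssd < best_ssd:
                    best_d, best_ssd = d, ssd
            return best_d

        left = np.random.default_rng(5).integers(0, 255, size=(100, 120))
        right = np.roll(left, -7, axis=1)               # synthetic shift of 7 pixels
        print(ssd_match(left, right, row=50, col=60))   # expected disparity: 7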

  3. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  4. Weibull distribution based on maximum likelihood with interval inspection data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.

    1985-01-01

    The two Weibull parameters are determined using the method of maximum likelihood. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
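
    A minimal sketch of Weibull maximum likelihood with interval inspection data: a failure known only to lie between two inspections contributes the probability mass of that interval, and an unfailed unit contributes its survival probability. The numbers below are made up for illustration and are unrelated to the SSME data.

        # Weibull maximum likelihood with interval-censored inspection data (sketch).
        # A failure observed between inspections (t_lo, t_hi] contributes
        # F(t_hi) - F(t_lo); an unfailed unit contributes the survival S(t_insp).
        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical inspection data: failure intervals and right-censoring times.
        intervals = np.array([(100.0, 200.0), (200.0, 300.0), (100.0, 200.0),
                              (300.0, 400.0), (200.0, 300.0)])
        censored = np.array([400.0, 400.0, 300.0])      # still unfailed at these times

        def weibull_cdf(t, shape, scale):
            return 1.0 - np.exp(-(t / scale) ** shape)

        def neg_log_like(params):
            shape, scale = np.exp(params)                # log-parameterized: keeps both > 0
            p_int = weibull_cdf(intervals[:, 1], shape, scale) - \
                    weibull_cdf(intervals[:, 0], shape, scale)
            log_surv = -(censored / scale) ** shape      # log S(t) for the Weibull
            return -(np.sum(np.log(p_int)) + np.sum(log_surv))

        fit = minimize(neg_log_like, x0=np.log([1.5, 300.0]), method="Nelder-Mead")
        shape_hat, scale_hat = np.exp(fit.x)
        print(shape_hat, scale_hat)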

  5. Properties of maximum likelihood male fertility estimation in plant populations.

    PubMed Central

    Morgan, M T

    1998-01-01

    Computer simulations are used to evaluate maximum likelihood methods for inferring male fertility in plant populations. The maximum likelihood method can provide substantial power to characterize male fertilities at the population level. Results emphasize, however, the importance of adequate experimental design and evaluation of fertility estimates, as well as limitations to inference (e.g., about the variance in male fertility or the correlation between fertility and phenotypic trait value) that can be reasonably drawn. PMID:9611217

  6. Approximate likelihood for large irregularly spaced spatial data

    PubMed Central

    Fuentes, Montserrat

    2008-01-01

    Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculations of the likelihood for a Gaussian spatial process observed at n locations requires O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log2 n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
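
    To convey the flavor of the approximation in a far simpler setting (a one-dimensional regular grid with no missing values, and an assumed AR(1) spectral model), the Whittle log-likelihood replaces the exact Gaussian likelihood with a sum over Fourier frequencies of log f(w) + I(w)/f(w), where the periodogram I comes from an FFT:

        # One-dimensional Whittle approximation to the Gaussian log-likelihood
        # (regular grid, no missing values): a simplified illustration of the idea.
        # The spectral model here is an AR(1) density, assumed for the example.
        import numpy as np

        def whittle_negloglik(params, x):
            sigma2, phi = params
            n = x.size
            freqs = 2.0 * np.pi * np.arange(1, n // 2) / n              # skip frequency 0
            periodogram = np.abs(np.fft.fft(x - x.mean())[1:n // 2]) ** 2 / (2.0 * np.pi * n)
            spec = sigma2 / (2.0 * np.pi * (1.0 + phi ** 2 - 2.0 * phi * np.cos(freqs)))
            # Whittle: sum over Fourier frequencies of log f + I/f (up to constants);
            # the O(n log n) cost is the FFT, and no n-by-n determinant is needed.
            return np.sum(np.log(spec) + periodogram / spec)

        rng = np.random.default_rng(2)
        x = np.empty(512)
        x[0] = rng.standard_normal()
        for t in range(1, x.size):                                      # simulate an AR(1)
            x[t] = 0.3 * x[t - 1] + rng.standard_normal()
        print(whittle_negloglik((1.0, 0.3), x))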

  7. Numerical methods for molecular dynamics. Progress report

    SciTech Connect

    Skeel, R.D.

    1991-12-31

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.

  8. Dynamic decoupling nonlinear control method for aircraft gust alleviation

    NASA Astrophysics Data System (ADS)

    Lv, Yang; Wan, Xiaopeng; Li, Aijun

    2008-10-01

    A dynamic decoupling nonlinear control method for MIMO systems is presented in this paper. The dynamic inversion method is used to decouple the multivariable system. The nonlinear control method is used to overcome the poor decoupling effect when the system model is inaccurate. Because the nonlinear control method has a correcting function and is expressed in analytic form, it is easy to adjust the parameters of the controller and to optimize the design of the control system. The method is used to design the vertical transition mode of an active control aircraft for gust alleviation. Simulation results show that the designed vertical transition mode improves the gust alleviation effect by about 34% compared with the normal aircraft.

  9. Prediction of Dynamic Stall Characteristics Using Advanced Nonlinear Panel Methods,

    DTIC Science & Technology

    This paper presents preliminary results of work in which a surface singularity panel method is being extended for modelling the dynamic interaction...between a separated wake and a surface undergoing an unsteady motion. The method combines the capabilities of an unsteady time-stepping code and a... technique for modelling extensive separation using free vortex sheets. Routines are developed for treating the dynamic interaction between the separated

  10. Recovering Velocity Distributions Via Penalized Likelihood

    NASA Astrophysics Data System (ADS)

    Merritt, David

    1997-07-01

    Line-of-sight velocity distributions are crucial for unravelling the dynamics of hot stellar systems. We present a new formalism based on penalized likelihood for deriving such distributions from kinematical data, and evaluate the performance of two algorithms that extract N(V) from absorption-line spectra and from sets of individual velocities. Both algorithms are superior to existing ones in that the solutions are nearly unbiased even when the data are so poor that a great deal of smoothing is required. In addition, the discrete-velocity algorithm is able to remove a known distribution of measurement errors from the estimate of N(V). The formalism is used to recover the velocity distribution of stars in five fields near the center of the globular cluster omega Centauri.
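
    As a simplified sketch of a penalized-likelihood estimate of N(V) from a set of discrete velocities (the measurement-error deconvolution step of the paper is omitted, and the grid, penalty weight and fake data below are assumptions), one can maximize the log-likelihood of the observed velocities minus a roughness penalty:

        # Penalized-likelihood recovery of a velocity distribution N(V) from discrete
        # velocities (simplified sketch; the paper's estimator also removes a known
        # distribution of measurement errors, which is omitted here).
        import numpy as np
        from scipy.optimize import minimize

        v_obs = np.random.default_rng(3).normal(0.0, 12.0, size=200)   # fake velocities, km/s
        grid = np.linspace(-50.0, 50.0, 41)
        dv = grid[1] - grid[0]

        def neg_penalized_loglik(u, alpha=5.0):
            # Softmax-style parameterization keeps N(V) positive and normalized.
            dens = np.exp(u - u.max())
            dens /= dens.sum() * dv
            # Evaluate N(V) at the observed velocities by linear interpolation.
            loglik = np.sum(np.log(np.interp(v_obs, grid, dens)))
            roughness = np.sum(np.diff(dens, 2) ** 2)   # second-difference smoothing penalty
            return -(loglik - alpha * roughness)

        u0 = np.zeros(grid.size)
        fit = minimize(neg_penalized_loglik, u0, method="L-BFGS-B")
        n_hat = np.exp(fit.x - fit.x.max())
        n_hat /= n_hat.sum() * dv                        # estimated N(V) on the grid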

  11. Section 9: Ground Water - Likelihood of Release

    EPA Pesticide Factsheets

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.

  12. Dynamic baseline detection method for power data network service

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2017-08-01

    This paper proposes a dynamic baseline traffic detection method based on historical traffic data for the power data network. The method uses Cisco's NetFlow acquisition tool to collect the original historical traffic data from network elements at fixed intervals. It uses three dimensions of information: the communication port, the time, and the traffic (number of bytes or number of packets). By filtering the data, removing outlying values, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
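
    The abstract gives only the outline of the pipeline; a loose sketch of the baseline-comparison step, with assumed details (robust outlier filtering of the per-interval history and a simple tolerance band), might look like this:

        # Loose sketch of a dynamic traffic baseline check (assumed details; the
        # NetFlow collection step is outside the scope of this illustration).
        import numpy as np

        def dynamic_baseline(history, k_outlier=3.0):
            """history: past byte counts for one port at one time-of-day slot."""
            history = np.asarray(history, dtype=float)
            med = np.median(history)
            mad = np.median(np.abs(history - med))
            kept = history[np.abs(history - med) <= k_outlier * 1.4826 * (mad + 1e-9)]
            return kept.mean(), kept.std()               # baseline value and spread

        def is_abnormal(current, history, band=3.0):
            base, spread = dynamic_baseline(history)
            return abs(current - base) > band * spread

        history = [1200, 1150, 1300, 1280, 9000, 1220, 1250]   # one spike to be filtered out
        print(is_abnormal(2500, history))                      # True: outside the band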

  13. Dynamics of Collisions Revealed by Classical Methods

    SciTech Connect

    Olson, R E; Fiol, J; Perez, J; Beiersdorfer, P

    2004-10-01

    Single ionization of helium by impact of 3.6 MeV/u Au{sup 53+} ions is investigated by means of quantum and classical methods. Calculation of fully-differential cross sections are compared to recently published data for ionization of low-energy electrons as a function of the momentum transferred by the projectile to the target system. The results show that inclusion of the resolution and uncertainties present in the experiment has a major influence on both the shape and magnitude of the calculated cross sections. The effect of using a two-electron model for the He target along with including electron-electron correlation are also investigated. However, after incorporating all the experimental conditions within the calculations, the one- and two-electron results present similar behavior.

  14. Fast multipole methods for particle dynamics

    PubMed Central

    Kurzak, J.; Pettitt, B. M.

    2008-01-01

    The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems. PMID:19194526

  15. CosmoSlik: Cosmology sampler of likelihoods

    NASA Astrophysics Data System (ADS)

    Millea, Marius

    2017-01-01

    CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

  16. Dynamic characteristics of a WPC—comparison of transfer matrix method and FE method

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Long; Nie, Wu

    2003-12-01

    To find the difference in dynamic characteristics between a conventional monohull ship and a wave-penetrating catamaran (WPC), a WPC was taken as the object of study; its dynamic characteristics were computed by the transfer matrix method and the finite element method respectively. From the comparison of the natural frequency and mode shape results, it was found that the FE method is more suitable for dynamic characteristics analysis of a WPC; special features of the dynamic characteristics of the WPC were identified, and some suggestions are proposed for optimizing the strength of a WPC in the design period.

  17. Dynamic force matching: A method for constructing dynamical coarse-grained models with realistic time dependence

    NASA Astrophysics Data System (ADS)

    Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.

    2015-04-01

    Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that

  18. Dynamic force matching: A method for constructing dynamical coarse-grained models with realistic time dependence

    SciTech Connect

    Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.

    2015-04-21

    Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that

  19. Maximum Likelihood Estimation of Population Parameters

    PubMed Central

    Fu, Y. X.; Li, W. H.

    1993-01-01

    One of the most important parameters in population genetics is θ = 4N(e)μ where N(e) is the effective population size and μ is the rate of mutation per gene per generation. We study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter θ over existing methods and the other is the estimation of parameter λ which is the ratio of two θ's. The minimum variances of estimates of the parameter θ are derived under two idealized situations. These minimum variances serve as the lower bounds of the variances of all possible estimates of θ in practice. We then show that Watterson's estimate of θ based on the number of segregating sites is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ(1)/θ(2) is obtained and the properties of the estimate are discussed. PMID:8375660
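
    For context, Watterson's estimate referred to above is the standard formula θ_W = S / Σ(1/i), with S the number of segregating sites and the sum running over i = 1, ..., n-1 for n sequences; it is shown here only as background, and the paper's improved maximum likelihood estimator is not reproduced.

        # Watterson's estimate of theta from the number of segregating sites
        # (standard population-genetics formula, shown for context only).
        def watterson_theta(num_segregating_sites, num_sequences):
            a_n = sum(1.0 / i for i in range(1, num_sequences))   # harmonic number a_{n-1}
            return num_segregating_sites / a_n

        print(watterson_theta(25, 10))   # e.g. 25 segregating sites in 10 sequences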

  20. Maximum likelihood estimation of population parameters

    SciTech Connect

    Fu, Y.X.; Li, W.H. )

    1993-08-01

    One of the most important parameters in population genetics is θ = 4N(e)μ where N(e) is the effective population size and μ is the rate of mutation per gene per generation. The authors study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter θ over existing methods and the other is the estimation of parameter λ which is the ratio of two θ's. The minimum variances serve as the lower bounds of the variances of all possible estimates of θ in practice. The authors then show that Watterson's estimate of θ based on the number of segregating sites is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ(1)/θ(2) is obtained and the properties of the estimate are discussed. 9 refs., 3 figs., 3 tabs.

  1. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    PubMed

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. In addition to these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' loads and driving distances, a corresponding carbon emission model is built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers are then modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization is triggered to find the optimal routes for non-served customers, which is time-consuming. Therefore, a two-phase robust dynamic multi-objective vehicle routing method is proposed. Three highlights of the method are: (i) after finding optimal robust virtual routes for all customers with multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed in the next phase by removing all dynamic customers from the robust virtual routes; (ii) dynamically appearing customers are appended to the routes according to their service times and the vehicles' statuses, and global vehicle routing optimization is triggered only when no suitable locations can be found for the dynamic customers; (iii) a metric measuring the algorithm's robustness is given. The statistical results indicate that the routes obtained by the proposed method have better stability and robustness, though they may be sub-optimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  2. Model-free linkage analysis using likelihoods

    SciTech Connect

    Curtis, D.; Sham, P.C.

    1995-09-01

    Misspecification of transmission model parameters can produce artifactual lod scores at small recombination fractions and in multipoint analysis. To avoid this problem, we have tried to devise a test that aims to detect a genetic effect at a particular locus, rather than attempting to estimate the map position of a locus with specified effect. Maximizing likelihoods over transmission model parameters, as well as linkage parameters, can produce seriously biased parameter estimates and so yield tests that lack power for the detection of linkage. However, constraining the transmission model parameters to produce the correct population prevalence largely avoids this problem. For computational convenience, we recommend that the likelihoods under linkage and nonlinkage are independently maximized over a limited set of transmission models, ranging from Mendelian dominant to null effect and from null effect to Mendelian recessive. In order to test for a genetic effect at a given map position, the likelihood under linkage is maximized over admixture, the proportion of families linked. Application to simulated data for a wide range of transmission models in both affected sib pairs and pedigrees demonstrates that the new method is well behaved under the null hypothesis and provides a powerful test for linkage when it is present. This test requires no specification of transmission model parameters, apart from an approximate estimate of the population prevalence. It can be applied equally to sib pairs and pedigrees, and, since it does not diminish the lod score at test positions very close to a marker, it is suitable for application to multipoint data. 24 refs., 1 fig., 4 tabs.

  3. A hybrid likelihood algorithm for risk modelling.

    PubMed

    Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D

    1995-03-01

    The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general.
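
    The partition mentioned above is easy to picture for a Poisson-process likelihood with a person-specific rate: L1 collects the log rates at the event times and L0 is minus the integrated rate over each person's follow-up. The following schematic uses a constant excess-relative-risk rate per person and invented numbers; it illustrates the partition only and is not the EPI-CURE implementation.

        # Schematic person-by-person partition of a Poisson-process log-likelihood
        # into an event term L1 and a follow-up term L0 (illustration only).
        import numpy as np

        def log_likelihood(baseline_rate, excess_per_dose, persons):
            """persons: dicts with 'dose', 'follow_up' (years) and a 'tumour' flag."""
            L1 = 0.0   # contribution of the observed tumours
            L0 = 0.0   # contribution of the event-free person-time
            for p in persons:
                rate = baseline_rate * (1.0 + excess_per_dose * p["dose"])
                if p["tumour"]:
                    L1 += np.log(rate)
                L0 -= rate * p["follow_up"]     # integral of a constant rate over follow-up
            return L1 + L0

        cohort = [{"dose": 0.0, "follow_up": 30.0, "tumour": False},
                  {"dose": 1.5, "follow_up": 25.0, "tumour": True},
                  {"dose": 0.8, "follow_up": 40.0, "tumour": False}]
        print(log_likelihood(0.001, 0.5, cohort))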

  4. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn great attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship among nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
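
    For reference, maximum likelihood fitting of a two-component normal mixture is usually carried out with the EM algorithm; the following generic sketch (the paper does not specify its fitting algorithm, and the data here are simulated) shows the standard E- and M-steps:

        # Standard EM iteration for a two-component normal mixture (generic sketch).
        import numpy as np

        def fit_two_component_mixture(x, n_iter=200):
            x = np.asarray(x, dtype=float)
            w, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
            for _ in range(n_iter):
                # E-step: posterior probability that each point belongs to component 1
                # (the 1/sqrt(2*pi) constants cancel in the ratio).
                d1 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
                d2 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
                r = w * d1 / (w * d1 + (1.0 - w) * d2)
                # M-step: update the weight, means and standard deviations.
                w = r.mean()
                mu = np.array([np.sum(r * x) / r.sum(),
                               np.sum((1 - r) * x) / (1 - r).sum()])
                sd = np.array([np.sqrt(np.sum(r * (x - mu[0]) ** 2) / r.sum()),
                               np.sqrt(np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum())])
            return w, mu, sd

        rng = np.random.default_rng(4)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])
        print(fit_two_component_mixture(x))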

  5. A review of substructure coupling methods for dynamic analysis

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chang, C. J.

    1976-01-01

    The state of the art in substructure coupling for dynamic analysis is assessed. A general formulation, which permits all previously described methods to be characterized by a few constituent matrices, is developed. Limited results comparing the accuracy of various methods are presented.

  6. PHRAPL: Phylogeographic Inference Using Approximate Likelihoods.

    PubMed

    Jackson, Nathon D; Morales, Ariadna E; Carstens, Bryan C; O'Meara, Brian C

    2017-02-16

    The demographic history of most species is complex, with multiple evolutionary processes combining to shape the observed patterns of genetic diversity. To infer this history, the discipline of phylogeography has (to date) used models that simplify the historical demography of the focal organism, for example by assuming or ignoring ongoing gene flow between populations or by requiring a priori specification of divergence history. Since no single model incorporates every possible evolutionary process, researchers rely on intuition to choose the models that they use to analyze their data. Here, we describe an approximate likelihood approach that reduces this reliance on intuition. PHRAPL allows users to calculate the probability of a large number of complex demographic histories given a set of gene trees, enabling them to identify the most likely underlying model and estimate parameters for a given system. Available model parameters include coalescence time among populations or species, gene flow, and population size. We describe the method and test its performance in model selection and parameter estimation using simulated data. We also compare model probabilities estimated using our approximate likelihood method to those obtained using standard analytical likelihood. The method performs well under a wide range of scenarios, although this is sometimes contingent on sampling many loci. In most scenarios, as long as there are enough loci and if divergence among populations is sufficiently deep, PHRAPL can return the true model in nearly all simulated replicates. Parameter estimates from the method are also generally accurate in most cases. PHRAPL is a valuable new method for phylogeographic model selection and will be particularly useful as a tool to more extensively explore demographic model space than is typically done or to estimate parameters for complex models that are not readily implemented using current methods. Estimating relevant parameters using the most

  7. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  8. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  9. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.

  10. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  11. An inverse dynamic method yielding flexible manipulator state trajectories

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1990-01-01

    An inverse dynamic equation for a flexible manipulator is derived in a state form. By dividing the inverse system into the causal part and the anticausal part, torque is calculated in the time domain for a certain end point trajectory, as well as trajectories of all state variables. The open loop control of the inverse dynamic method shows an excellent result in simulation. For practical applications, a control strategy adapting feedback tracking control to the inverse dynamic feedforward control is illustrated, and its good experimental result is presented.

  12. An inverse dynamic method yielding flexible manipulator state trajectories

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1990-01-01

    An inverse dynamic equation for a flexible manipulator is derived in a state form. By dividing the inverse system into the causal part and the anticausal part, one can calculate torque in the time domain for a certain end-point trajectory, as well as trajectories of all state variables. The open-loop control of the inverse dynamic method shows an excellent result in simulation. For practical applications, a control strategy adapting feedback tracking control to the inverse dynamic feedforward control is illustrated, and experimental results are presented.

  13. Constraint likelihood analysis for a network of gravitational wave detectors

    SciTech Connect

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-12-15

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  14. Application of AN Asymptotic Method to Transient Dynamic Problems

    NASA Astrophysics Data System (ADS)

    Fafard, M.; Henchi, K.; Gendron, G.; Ammar, S.

    1997-11-01

    A new method to solve linear dynamics problems using an asymptotic method is presented. Asymptotic methods have been efficiently used for many decades to solve non-linear quasistatic structural problems. Generally, structural dynamics problems are solved using finite elements for the discretization of the space domain of the differential equations, and explicit or implicit schemes for the time domain. With the asymptotic method, time schemes are not necessary to solve the discretized (space) equations. Using the analytical solution of a single degree of freedom (DOF) problem, it is demonstrated that the Dynamic Asymptotic Method (DAM) converges to the exact solution when an infinite series expansion is used. The stability of the method has been studied. DAM is conditionally stable for a finite series expansion and unconditionally stable for an infinite series expansion. This method is similar to the analytical method of undetermined coefficients or to the power series method used to solve ordinary differential equations. For a multi-degree-of-freedom (MDOF) problem with a lumped mass matrix, no factorization or explicit inversion of global matrices is necessary. It is shown that this conditionally stable method is more efficient than other conditionally stable explicit central difference integration techniques. The solution is continuous irrespective of the time segment (step), and the derivatives are continuous up to order N-1, where N is the order of the series expansion.
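
    The power series method mentioned above is easy to illustrate for a single undamped degree of freedom, m x'' + k x = 0: the series coefficients obey a two-term recurrence, and a truncated expansion can be checked against the exact cosine solution. This is a generic textbook illustration, not the DAM formulation itself.

        # Truncated power-series (Taylor) solution of an undamped SDOF oscillator,
        # m*x'' + k*x = 0, as a generic illustration of the series idea.
        import numpy as np

        def series_solution(t, m, k, x0, v0, order=20):
            a = np.zeros(order + 1)
            a[0], a[1] = x0, v0
            for n in range(order - 1):
                a[n + 2] = -(k / m) * a[n] / ((n + 2) * (n + 1))   # recurrence from the ODE
            return sum(a[n] * t ** n for n in range(order + 1))

        m, k, x0, v0 = 1.0, 4.0, 1.0, 0.0
        t = 0.8
        # Compare the truncated series with the exact solution x0 * cos(sqrt(k/m) * t).
        print(series_solution(t, m, k, x0, v0), x0 * np.cos(np.sqrt(k / m) * t))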

  15. LIKEDM: Likelihood calculator of dark matter detection

    NASA Astrophysics Data System (ADS)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  16. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    PubMed Central

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified. PMID:25685841

  17. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified.

  18. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
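
    Schematically, and assuming white Gaussian noise so that maximum likelihood reduces to nonlinear least squares, the reconstruction described above can be written as below; the notation (undersampled Fourier operator, coil sensitivities, Bloch-simulated fingerprints) is assumed here and may differ from the paper's.

```latex
% Schematic ML formulation (notation assumed, not necessarily the paper's):
% d_{t,c} are undersampled k-space data at frame t and coil c, F_{\Omega_t} the
% undersampled Fourier operator, S_c coil sensitivities, \phi_t(\theta) the
% Bloch-simulated fingerprint values of all voxels at frame t, and \rho the
% proton-density map.
\begin{equation}
(\hat{\theta},\hat{\rho})
  = \arg\min_{\theta,\rho}
    \sum_{t,c}
    \Bigl\|
      d_{t,c} - F_{\Omega_t} S_c \,\mathrm{diag}(\rho)\,\phi_t(\theta)
    \Bigr\|_2^2 .
\end{equation}
% Under white Gaussian noise the ML estimate is thus a nonlinear least-squares
% fit of the parameter maps \theta (e.g., T1, T2) and \rho directly to k-space.
```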

  19. Method for recovering dynamic position of photoelectric encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yong-zhi; Wan, Qiu-hua; Zhao, Chang-hai; Sun, Ying; Liang, Li-hui; Liu, Yi-sheng

    2009-05-01

    This paper presents a method to recover the dynamic position of a photoelectric encoder. When operating in a dynamic state, the original outputs of the photoelectric encoder are, in theory, two sine or triangular signals with a phase difference of π/2; in practice, the actual output signals deviate from this ideal, and interpolating on the basis of these distorted signals produces interpolation errors. In the dynamic state, the raw signal data obtained by the data acquisition system form an equation in time. By processing these data with equiangular resampling and harmonic analysis, the equation is converted from the time domain to the position domain, yielding an original position equation from which the interpolation errors can be obtained. The method therefore allows interpolation errors to be checked in the dynamic state and provides a basis for electronic interpolation, improving the dynamic interpolation precision of the encoder. Software simulation and experimental analysis both confirm the effectiveness of the method, which provides the theoretical basis for precision checking and calibration in motion.
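
    The toy sketch below illustrates one way such a dynamic check might look numerically, assuming a constant rotation speed and a simple amplitude/offset distortion model (both assumptions are ours, not the paper's): the measured angle is recovered from distorted quadrature signals with an arctangent, the interpolation error is formed against the ideal linear position, and its harmonic content is examined with an FFT.

```python
import numpy as np

# Toy sketch (assumptions: constant rotation speed, a simple amplitude/offset
# distortion model). Recover the measured phase from distorted quadrature
# signals, form the interpolation error against the ideal position, and look
# at its harmonic content with an FFT.

n = 4096
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # ideal position

# Distorted quadrature outputs: unequal amplitudes, offsets, small phase error.
a = 1.00 * np.sin(theta) + 0.02
b = 0.95 * np.cos(theta + 0.01) - 0.01

theta_meas = np.unwrap(np.arctan2(a, b))          # measured position
err = theta_meas - theta                          # interpolation error (rad)
err -= err.mean()

# Harmonic analysis of the error over one signal period.
spectrum = np.abs(np.fft.rfft(err)) / n
for k in range(1, 5):
    print(f"harmonic {k}: amplitude {2 * spectrum[k]:.4e} rad")
```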

  20. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
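
    For reference, a minimal sketch of the classical load-dependent (Wilson) Ritz vector recurrence is given below: the first vector is a static solution against the spatial load pattern, and each subsequent vector is obtained from a static solve against the inertia force of the previous one, with mass-orthonormalization at every step. The small stiffness and mass matrices are invented for illustration and are unrelated to the space station model used in the paper.

```python
import numpy as np

# Minimal sketch of load-dependent (Wilson) Ritz vector generation for
#   M q'' + K q = f(t) = r * g(t),  with spatial load pattern r.
# The 4-DOF K, M, and r below are invented for illustration.

def load_dependent_ritz(K, M, r, n_vec):
    n = K.shape[0]
    X = np.zeros((n, n_vec))

    # First vector: static response to the load pattern, mass-normalized.
    x = np.linalg.solve(K, r)
    X[:, 0] = x / np.sqrt(x @ M @ x)

    for i in range(1, n_vec):
        x = np.linalg.solve(K, M @ X[:, i - 1])     # recurrence step
        for j in range(i):                          # M-orthogonalize (Gram-Schmidt)
            x -= (X[:, j] @ M @ x) * X[:, j]
        X[:, i] = x / np.sqrt(x @ M @ x)            # mass-normalize
    return X

K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]]) * 100.0
M = np.eye(4)
r = np.array([0., 0., 0., 1.])                      # tip load pattern

X = load_dependent_ritz(K, M, r, 3)
K_red, M_red = X.T @ K @ X, X.T @ M @ X             # reduced matrices
print("reduced K:\n", np.round(K_red, 3))
print("reduced M:\n", np.round(M_red, 3))
```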

  1. Can the ring polymer molecular dynamics method be interpreted as real time quantum dynamics?

    SciTech Connect

    Jang, Seogjoo; Sinitskiy, Anton V.; Voth, Gregory A.

    2014-04-21

    The ring polymer molecular dynamics (RPMD) method has gained popularity in recent years as a simple approximation for calculating real time quantum correlation functions in condensed media. However, the extent to which RPMD captures real dynamical quantum effects and why it fails under certain situations have not been clearly understood. Addressing this issue has been difficult in the absence of a genuine justification for the RPMD algorithm starting from the quantum Liouville equation. To this end, a new and exact path integral formalism for the calculation of real time quantum correlation functions is presented in this work, which can serve as a rigorous foundation for the analysis of the RPMD method as well as providing an alternative derivation of the well established centroid molecular dynamics method. The new formalism utilizes the cyclic symmetry of the imaginary time path integral in the most general sense and enables the expression of Kubo-transformed quantum time correlation functions as that of physical observables pre-averaged over the imaginary time path. Upon filtering with a centroid constraint function, the formulation results in the centroid dynamics formalism. Upon filtering with the position representation of the imaginary time path integral, we obtain an exact quantum dynamics formalism involving the same variables as the RPMD method. The analysis of the RPMD approximation based on this approach clarifies that an explicit quantum dynamical justification does not exist for the use of the ring polymer harmonic potential term (imaginary time kinetic energy) as implemented in the RPMD method. It is analyzed why this can cause substantial errors in nonlinear correlation functions of harmonic oscillators. Such errors can be significant for general correlation functions of anharmonic systems. We also demonstrate that the short time accuracy of the exact path integral limit of RPMD is of lower order than those for finite discretization of path. The
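
    For context, the classical equations of motion that the RPMD approximation propagates are those of the standard n-bead ring-polymer Hamiltonian, whose harmonic spring term is the quantity questioned above (standard notation, not taken from this paper):

```latex
% Standard n-bead ring-polymer Hamiltonian evolved classically in RPMD
% (cyclic index: q_{n+1} \equiv q_1, and \omega_n = n/(\beta\hbar)):
\begin{equation}
H_n(\mathbf{p},\mathbf{q})
  = \sum_{i=1}^{n}
    \left[
      \frac{p_i^2}{2m}
      + \frac{1}{2} m \omega_n^{2} \,(q_i - q_{i+1})^{2}
      + V(q_i)
    \right],
\end{equation}
% with Kubo-transformed correlation functions approximated by classical
% averages of bead-averaged observables over trajectories of H_n.
```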

  2. Investigation of Ribosomes Using Molecular Dynamics Simulation Methods.

    PubMed

    Makarov, G I; Makarova, T M; Sumbatyan, N V; Bogdanov, A A

    2016-12-01

    The ribosome as a complex molecular machine undergoes significant conformational changes while synthesizing a protein molecule. Molecular dynamics simulations have been used as complementary approaches to X-ray crystallography and cryoelectron microscopy, as well as biochemical methods, to answer many questions that modern structural methods leave unsolved. In this review, we demonstrate that all-atom modeling of ribosome molecular dynamics is particularly useful in describing the process of tRNA translocation, atomic details of behavior of nascent peptides, antibiotics, and other small molecules in the ribosomal tunnel, and the putative mechanism of allosteric signal transmission to functional sites of the ribosome.

  3. Nonstationary hydrological time series forecasting using nonlinear dynamic methods

    NASA Astrophysics Data System (ADS)

    Coulibaly, Paulin; Baldwin, Connely K.

    2005-06-01

    Recent evidence of nonstationary trends in water resources time series, as a result of natural and/or anthropogenic climate variability and change, has raised interest in nonlinear dynamic system modeling methods. In this study, the effectiveness of dynamically driven recurrent neural networks (RNN) for complex time-varying water resources system modeling is investigated. An optimal dynamic RNN approach is proposed to directly forecast different nonstationary hydrological time series. The proposed method automatically selects the optimally trained network in each case. The simulation performance of the dynamic RNN-based model is compared with the results obtained from optimal multivariate adaptive regression splines (MARS) models. It is shown that the dynamically driven RNN model can be a good alternative for the modeling of the complex dynamics of a hydrological system, performing better than the MARS model on the three selected hydrological time series, namely the historical storage volumes of the Great Salt Lake, the Saint-Lawrence River flows, and the Nile River flows.

  4. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by gradually searching the parameter space from regions of low likelihood toward regions of high likelihood, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient DREAMzs sampling algorithm is incorporated into the local sampling step of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is reduced by using adaptive sparse grid surrogates.
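
    The basic nested-sampling loop is sketched below on a toy one-dimensional problem with a uniform prior and a Gaussian likelihood whose evidence is known to be approximately one; plain rejection sampling stands in for the local sampler (M-H or DREAMzs in the abstract), and all numbers are illustrative.

```python
import numpy as np

# Toy nested-sampling sketch for the evidence Z = integral of L(x) p(x) dx with
# a uniform prior on [0, 1] and a narrow Gaussian likelihood (Z is close to 1).
# Simple rejection sampling stands in for the local sampler.

rng = np.random.default_rng(1)

def loglike(x):
    return -0.5 * ((x - 0.3) / 0.02) ** 2 - np.log(0.02 * np.sqrt(2 * np.pi))

n_live, n_iter = 200, 1200
live = rng.uniform(0.0, 1.0, n_live)
live_logl = loglike(live)

log_z = -np.inf
log_x_prev = 0.0                       # ln(prior volume), starts at ln(1)
for k in range(1, n_iter + 1):
    worst = int(np.argmin(live_logl))
    log_x = -k / n_live                # expected shrinkage of the prior volume
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))   # weight X_{k-1} - X_k
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    log_x_prev = log_x

    # Replace the worst point by a new prior draw with L > L_min (rejection).
    while True:
        x_new = rng.uniform(0.0, 1.0)
        if loglike(x_new) > live_logl[worst]:
            break
    live[worst], live_logl[worst] = x_new, loglike(x_new)

# Add the contribution of the remaining live points.
log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl))) + log_x_prev)
print("estimated ln Z:", log_z, "(analytic value is ~0 for this toy problem)")
```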

  5. Parametric likelihood inference for interval censored competing risks data.

    PubMed

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  6. Accelerated molecular dynamics methods: introduction and recent developments

    SciTech Connect

    Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G

    2009-01-01

    A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Ignoring the case of liquids, the dynamics on these time scales is typically characterized by infrequent-event transitions, from state to state, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what
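
    As a minimal illustration of how TST rate constants feed a kinetic Monte Carlo simulation, the sketch below performs the basic KMC step: an escape pathway is selected with probability proportional to its rate, and the clock is advanced by an exponentially distributed waiting time. The rate table is invented for illustration.

```python
import numpy as np

# Minimal kinetic Monte Carlo step: given TST rate constants k_i for the escape
# pathways out of the current state, pick a pathway with probability k_i / k_tot
# and advance the clock by an exponentially distributed waiting time.
# The rate table below is invented for illustration.

rng = np.random.default_rng(42)
rates = {"hop A->B": 1.0e6, "hop A->C": 2.5e5, "exchange": 4.0e4}   # s^-1

def kmc_step(rates, rng):
    names = list(rates)
    k = np.array([rates[n] for n in names])
    k_tot = k.sum()
    event = rng.choice(names, p=k / k_tot)        # pathway selection
    dt = rng.exponential(1.0 / k_tot)             # residence time in the state
    return event, dt

t = 0.0
for _ in range(5):
    event, dt = kmc_step(rates, rng)
    t += dt
    print(f"t = {t:.3e} s  ->  {event}")
```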

  7. Parameter estimation in X-ray astronomy using maximum likelihood

    NASA Technical Reports Server (NTRS)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
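
    For reference, the Poisson log-likelihood that such fits maximize can be written as follows (notation assumed here; minimizing the equivalent Cash-type statistic C replaces the Gaussian approximation underlying minimum chi-squared):

```latex
% Poisson log-likelihood for observed counts n_i and model-predicted counts
% m_i(\vartheta) in spectral channel i (the n_i! term is parameter-independent):
\begin{equation}
\ln L(\vartheta) = \sum_i \bigl[\, n_i \ln m_i(\vartheta) - m_i(\vartheta) \,\bigr]
  + \text{const.},
\qquad
C(\vartheta) = -2 \ln L(\vartheta) + \text{const.}
\end{equation}
% Maximizing \ln L (equivalently, minimizing the Cash-type statistic C) avoids
% the Gaussian approximation that underlies minimum chi-squared fitting.
```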

  8. What Is the Best Method to Fit Time-Resolved Data? A Comparison of the Residual Minimization and the Maximum Likelihood Techniques As Applied to Experimental Time-Correlated, Single-Photon Counting Data.

    PubMed

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W

    2016-03-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
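
    A minimal sketch of such a Poisson ML fit is given below for a single-exponential decay with sparse counts, using a general-purpose optimizer; for brevity it omits the convolution with a measured instrument response function that the paper always includes, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch: maximum-likelihood fit of a single-exponential decay to sparse
# Poisson-distributed TCSPC counts. Convolution with the instrument response,
# which the paper always includes, is neglected here for brevity.

rng = np.random.default_rng(7)
t = np.linspace(0.0, 5.0, 256)          # ns
tau_true, total_counts = 0.53, 200      # 530 ps lifetime, ~200 photons

model_shape = np.exp(-t / tau_true)
expected = total_counts * model_shape / model_shape.sum()
counts = rng.poisson(expected)

def neg_loglike(params):
    amp, tau = params
    if amp <= 0 or tau <= 0:            # keep the optimizer in the valid region
        return 1e12
    mu = np.clip(amp * np.exp(-t / tau), 1e-12, None)
    # Negative Poisson log-likelihood, dropping the ln(n!) term.
    return -np.sum(counts * np.log(mu) - mu)

res = minimize(neg_loglike, x0=[counts.max() + 1.0, 1.0], method="Nelder-Mead")
print("fitted lifetime: %.3f ns (true %.3f ns)" % (res.x[1], tau_true))
```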

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
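
    One standard modern realization of such an iterative procedure is the EM iteration for a normal mixture, sketched below for a two-component univariate case with simulated data; this is an illustration of the general approach, not the report's own algorithm or code.

```python
import numpy as np

# Sketch of the EM iteration for a two-component univariate normal mixture,
# one standard realization of such iterative ML procedures (not the report's code).

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

w = np.array([0.5, 0.5])            # mixing proportions
mu = np.array([-1.0, 1.0])          # means
var = np.array([1.0, 1.0])          # variances

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = w * normal_pdf(data[:, None], mu, var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data.
    n_k = resp.sum(axis=0)
    w = n_k / data.size
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k

print("weights:", np.round(w, 3), "means:", np.round(mu, 3),
      "std devs:", np.round(np.sqrt(var), 3))
```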

  10. Dynamic analysis method of rotating shaft with magnetic pattern

    NASA Astrophysics Data System (ADS)

    Hsiao, Heng-Sheng; Shih, Shuo-Wu; Chang, Jen-Yuan (James)

    2017-05-01

    Vibration in rotating systems arises mostly from the imbalance response of the rotating shaft. Accelerometers or eddy current probes are commonly used to measure the shaft's dynamic behavior in the form of acceleration and displacement. Originating from the coding and operating algorithm of hard disk drives, a method for detecting the position error of the read head is applied to measuring and analyzing the dynamic response of a rotating shaft system. By magnetizing a pattern on a magnetic medium wrapped around the rotating shaft, analog signals can be acquired to determine the shaft's axial and radial displacement. In this paper, a method for dynamic measurement of a rotating shaft using a magnetic pattern is proposed and validated through calibrated experiments.

  11. Vestige: Maximum likelihood phylogenetic footprinting

    PubMed Central

    Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A

    2005-01-01

    Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and

  12. Non-Concave Penalized Likelihood with NP-Dimensionality

    PubMed Central

    Fan, Jianqing; Lv, Jinchi

    2011-01-01

    Penalized likelihood methods are fundamental to ultra-high dimensional variable selection. How high dimensionality such methods can handle remains largely unknown. In this paper, we show that in the context of generalized linear models, such methods possess model selection consistency with oracle properties even for dimensionality of Non-Polynomial (NP) order of sample size, for a class of penalized likelihood approaches using folded-concave penalty functions, which were introduced to ameliorate the bias problems of convex penalty functions. This fills a long-standing gap in the literature where the dimensionality is allowed to grow slowly with the sample size. Our results are also applicable to penalized likelihood with the L1-penalty, which is a convex function at the boundary of the class of folded-concave penalty functions under consideration. The coordinate optimization is implemented for finding the solution paths, whose performance is evaluated by a few simulation examples and the real data analysis. PMID:22287795

  13. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  14. Adiabatic molecular-dynamics-simulation-method studies of kinetic friction

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Sokoloff, J. B.

    2005-06-01

    An adiabatic molecular-dynamics method is developed and used to study the Muser-Robbins model for dry friction (i.e., nonzero kinetic friction in the slow sliding speed limit). In this model, dry friction between two crystalline surfaces rotated with respect to each other is due to mobile molecules (i.e., dirt particles) adsorbed at the interface. Our adiabatic method allows us to quickly locate interface potential-well minima, which become unstable during sliding of the surfaces. Since dissipation due to friction in the slow sliding speed limit results from mobile molecules dropping out of such unstable wells, our method provides a way to calculate dry friction, which agrees extremely well with results found by conventional molecular dynamics for the same system, but our method is more than a factor of 10 faster.

  15. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    ERIC Educational Resources Information Center

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  16. Continuation Methods for Qualitative Analysis of Aircraft Dynamics

    NASA Technical Reports Server (NTRS)

    Cummings, Peter A.

    2004-01-01

    A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.

  17. Evaluating measurement equivalence using the item response theory log-likelihood ratio (IRTLR) method to assess differential item functioning (DIF): applications (with illustrations) to measures of physical functioning ability and general distress.

    PubMed

    Teresi, Jeanne A; Ocepek-Welikson, Katja; Kleinman, Marjorie; Cook, Karon F; Crane, Paul K; Gibbons, Laura E; Morales, Leo S; Orlando-Edelen, Maria; Cella, David

    2007-01-01

    Methods based on item response theory (IRT) that can be used to examine differential item functioning (DIF) are illustrated. An IRT-based approach to the detection of DIF was applied to physical function and general distress item sets. DIF was examined with respect to gender, age and race. The method used for DIF detection was the item response theory log-likelihood ratio (IRTLR) approach. DIF magnitude was measured using the differences in the expected item scores, expressed as the unsigned probability differences, and calculated using the non-compensatory DIF index (NCDIF). Finally, impact was assessed using expected scale scores, expressed as group differences in the total test (measure) response functions. The example for the illustration of the methods came from a study of 1,714 patients with cancer or HIV/AIDS. The measure contained 23 items measuring physical functioning ability and 15 items addressing general distress, scored in the positive direction. The substantive findings were of relatively small magnitude DIF. In total, six items showed relatively larger magnitude (expected item score differences greater than the cutoff) of DIF with respect to physical function across the three comparisons: "trouble with a long walk" (race), "vigorous activities" (race, age), "bending, kneeling stooping" (age), "lifting or carrying groceries" (race), "limited in hobbies, leisure" (age), "lack of energy" (race). None of the general distress items evidenced high magnitude DIF; although "worrying about dying" showed some DIF with respect to both age and race, after adjustment. The fact that many physical function items showed DIF with respect to age, even after adjustment for multiple comparisons, indicates that the instrument may be performing differently for these groups. While the magnitude and impact of DIF at the item and scale level was minimal, caution should be exercised in the use of subsets of these items, as might occur with selection for clinical decisions or

  18. Review of dynamic optimization methods in renewable natural resource management

    USGS Publications Warehouse

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  19. Microbubble dynamics monitoring using a dual modulation method.

    PubMed

    Fouan, Damien; Achaoui, Younes; Payan, Cédric; Mensah, Serge

    2015-02-01

    An experimental method for characterizing microbubbles' oscillations is presented. With a Dual Frequency ultrasound excitation method, both relative and absolute microbubble size variations can be measured. Using the same experimental setup, a simple signal processing step applied to both the amplitude and the frequency modulations yields a two-fold picture of microbubbles' dynamics. In addition, assuming the occurrence of small radial oscillations, the equilibrium radius of the microbubbles can be accurately estimated.

  20. The predictive integration method for dynamics of infrequent events

    NASA Astrophysics Data System (ADS)

    Cubuk, Ekin; Waterland, Amos; Kaxiras, Efthimios

    2012-02-01

    With the increasing prominence and availability of multi-processor computers, recasting problems in a form amenable to parallel solution is becoming a critical step in effective scientific computation. We present a method for parallelizing molecular dynamics simulations in the time domain by using predictive integration. Our method is closely related to Voter's parallel replica method, but goes beyond that approach in that it involves speculatively initializing processors in more than one basin. Our predictive integration method requires predicting possible future configurations, but it does not suffer from restrictions due to the correlation time after transitions between basins.

  1. Maximum likelihood inference of reticulate evolutionary histories

    PubMed Central

    Yu, Yun; Dong, Jianrong; Liu, Kevin J.; Nakhleh, Luay

    2014-01-01

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173

  2. Maximum likelihood inference of reticulate evolutionary histories.

    PubMed

    Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay

    2014-11-18

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships.

  3. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for understanding the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to this dynamic nature contributes to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low-rank tensor and a sparse tensor, within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low-rank tensor models the similar spatial distribution information among frames, which changes slowly over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which change rapidly over time. With the assistance of Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function that considers the multi-frame measurement data, the dynamic evolution information of a time-varying imaging object, and the characteristics of the low-rank tensor and the sparse tensor is proposed to convert the imaging task in the ECT measurement into a reconstruction problem for a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed in a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
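
    A generic cost function of the kind described, combining a data-fidelity term, a low-rank penalty on the background tensor, a sparsity penalty on the perturbation tensor, and a Tikhonov term, can be written as below; the exact regularizers and weights used in the paper may differ.

```latex
% Generic low-rank + sparse formulation of the kind described (the paper's exact
% cost function may differ): stack the frames into a third-order tensor
% \mathcal{X} = \mathcal{L} + \mathcal{S}, with A the ECT sensitivity operator
% and b the multi-frame capacitance data.
\begin{equation}
\min_{\mathcal{L},\,\mathcal{S}}\;
  \tfrac{1}{2}\bigl\| A(\mathcal{L}+\mathcal{S}) - b \bigr\|_2^{2}
  + \lambda_1 \sum_{k} \bigl\| \mathcal{L}_{(k)} \bigr\|_{*}
  + \lambda_2 \bigl\| \mathcal{S} \bigr\|_{1}
  + \tfrac{\mu}{2} \bigl\| \mathcal{L}+\mathcal{S} \bigr\|_F^{2},
\end{equation}
% where \|\mathcal{L}_{(k)}\|_* penalizes the rank of the mode-k unfoldings
% (slowly varying background across frames), \|\mathcal{S}\|_1 promotes sparse
% frame-to-frame perturbations, and the last term is a Tikhonov penalty.
```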

  4. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Pelties, C.; Gabriel, A.

    2012-12-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012

  5. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2013-04-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012

  6. A maximum likelihood estimation framework for delay logistic differential equation model

    NASA Astrophysics Data System (ADS)

    Mahmoud, Ahmed Adly; Dass, Sarat Chandra; Muthuvalu, Mohana S.

    2016-11-01

    This paper introduces the maximum likelihood method of estimation for a delay differential equation model governed by an unknown delay and other parameters of interest, combined with a numerical solver approach. As an example, we consider the delayed logistic differential equation. A grid-based estimation framework is proposed. Our methodology correctly estimates the delay parameter as well as the initial starting value of the dynamical system from simulated data. The computations were carried out with the help of the mathematical software MATLAB® 8.0 R2012b.
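
    A toy version of such a grid-based estimate is sketched below for the delayed logistic (Hutchinson) equation, using a fixed-step Euler solver with a history buffer and a Gaussian log-likelihood evaluated on a grid of candidate delays; the parameter values and the solver are illustrative and do not reproduce the paper's MATLAB implementation.

```python
import numpy as np

# Toy sketch of grid-based ML estimation of the delay tau in the delayed
# logistic (Hutchinson) equation  x'(t) = r x(t) (1 - x(t - tau) / K),
# using a fixed-step Euler solver with a history buffer. Parameter values and
# the solver are illustrative; the paper uses MATLAB with a numerical solver.

def solve_delayed_logistic(r, K, tau, x0, t_end, dt=0.01):
    n, lag = int(t_end / dt), int(round(tau / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x_lag = x0 if i < lag else x[i - lag]       # constant history x(t<=0) = x0
        x[i + 1] = x[i] + dt * r * x[i] * (1.0 - x_lag / K)
    return x

rng = np.random.default_rng(5)
r, K, tau_true, x0, t_end, dt = 0.8, 1.0, 1.5, 0.1, 20.0, 0.01
truth = solve_delayed_logistic(r, K, tau_true, x0, t_end, dt)
obs_idx = np.arange(0, truth.size, 50)              # observations every 0.5 time units
sigma = 0.02
data = truth[obs_idx] + rng.normal(0.0, sigma, obs_idx.size)

def loglike(tau):
    pred = solve_delayed_logistic(r, K, tau, x0, t_end, dt)[obs_idx]
    return -0.5 * np.sum((data - pred) ** 2) / sigma ** 2

tau_grid = np.arange(0.5, 2.51, 0.05)
best_tau = tau_grid[np.argmax([loglike(tau) for tau in tau_grid])]
print(f"ML estimate of delay: {best_tau:.2f} (true value {tau_true})")
```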

  7. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    SciTech Connect

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.

  8. Identification of Sparse Neural Functional Connectivity using Penalized Likelihood Estimation and Basis Functions

    PubMed Central

    Song, Dong; Wang, Haonan; Tu, Catherine Y.; Marmarelis, Vasilis Z.; Hampson, Robert E.; Deadwyler, Sam A.; Berger, Theodore W.

    2013-01-01

    One key problem in computational neuroscience and neural engineering is the identification and modeling of functional connectivity in the brain using spike train data. To reduce model complexity, alleviate overfitting, and thus facilitate model interpretation, sparse representation and estimation of functional connectivity is needed. Sparsities include global sparsity, which captures the sparse connectivities between neurons, and local sparsity, which reflects the active temporal ranges of the input-output dynamical interactions. In this paper, we formulate a generalized functional additive model (GFAM) and develop the associated penalized likelihood estimation methods for such a modeling problem. A GFAM consists of a set of basis functions convolving the input signals, and a link function generating the firing probability of the output neuron from the summation of the convolutions weighted by the sought model coefficients. Model sparsities are achieved by using various penalized likelihood estimations and basis functions. Specifically, we introduce two variations of the GFAM using a global basis (e.g., Laguerre basis) and group LASSO estimation, and a local basis (e.g., B-spline basis) and group bridge estimation, respectively. We further develop an optimization method based on quadratic approximation of the likelihood function for the estimation of these models. Simulation and experimental results show that both group-LASSO-Laguerre and group-bridge-B-spline can capture faithfully the global sparsities, while the latter can replicate accurately and simultaneously both global and local sparsities. The sparse models outperform the full models estimated with the standard maximum likelihood method in out-of-sample predictions. PMID:23674048

  9. Dynamic Optical Grating Device and Associated Method for Modulating Light

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Chu, Sang-Hyon (Inventor)

    2012-01-01

    A dynamic optical grating device and associated method for modulating light is provided that is capable of controlling the spectral properties and propagation of light without moving mechanical components by the use of a dynamic electric and/or magnetic field. By changing the electric field and/or magnetic field, the index of refraction, the extinction coefficient, the transmittivity, and the reflectivity of the optical grating device may be controlled in order to control the spectral properties of the light reflected or transmitted by the device.

  10. Analysis of Nonlinear Dynamics by Square Matrix Method

    SciTech Connect

    Yu, Li Hua

    2016-07-25

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with a much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tune obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  11. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  12. Maximum-likelihood estimation of admixture proportions from genetic data.

    PubMed Central

    Wang, Jinliang

    2003-01-01

    For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794

  13. Vortex element methods for fluid dynamic analysis of engineering systems

    NASA Astrophysics Data System (ADS)

    Lewis, Reginald Ivan

    The surface-vorticity method of computational fluid mechanics is described, with an emphasis on turbomachinery applications, in an introduction for engineers. Chapters are devoted to surface singularity modeling; lifting bodies, two-dimensional airfoils, and cascades; mixed-flow and radial cascades; bodies of revolution, ducts, and annuli; ducted propellers and fans; three-dimensional and meridional flows in turbomachines; free vorticity shear layers and inverse methods; vortex dynamics in inviscid flows; the simulation of viscous diffusion in discrete vortex modeling; vortex-cloud modeling by the boundary-integral method; vortex-cloud models for lifting bodies and cascades; and grid systems for vortex dynamics and meridional flows. Diagrams, graphs, and the listings for a set of computer programs are provided.

  14. Development of a transfer function method for dynamic stability measurement

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1977-01-01

    A flutter testing method based on transfer function measurements is developed. The error statistics of several dynamic stability measurement methods are reviewed. It is shown that the transfer function measurement controls the error level by averaging the data and correlating the input and output. The method also gives a direct estimate of the error in the response measurement. An algorithm is developed for obtaining the natural frequency and damping ratio of lightly damped modes of the system, using integrals of the transfer function in the vicinity of a resonant peak. Guidelines are given for selecting the parameters in the transfer function measurement. Finally, the dynamic stability measurement technique is applied to data from a wind tunnel test of a proprotor and wing model.

  15. Analysis of the human electroencephalogram with methods from nonlinear dynamics

    SciTech Connect

    Mayer-Kress, G.; Holzfuss, J.

    1986-09-08

    We apply several different methods from nonlinear dynamical systems to the analysis of the degree of temporal disorder in data from human EEG. Among these are methods of geometrical reconstruction, dimensional complexity, mutual information content, and two different approaches for estimating Lyapunov characteristic exponents. We show how the naive interpretation of numerical results can lead to a considerable underestimation of the dimensional complexity. This is true even when the errors from least squares fits are small. We present more realistic error estimates and show that they seem to contain additional, important information. By applying independent methods of analysis to the same data sets for a given lead, we find that the degree of temporal disorder is minimal in a "resting awake" state and increases in sleep as well as in fluroxene-induced general anesthesia. At the same time the statistical errors appear to decrease, which can be interpreted as a transition to a more uniform dynamical state. 29 refs., 10 figs.

  16. Likelihood maximization for list-mode emission tomographic image reconstruction.

    PubMed

    Byrne, C

    2001-10-01

    The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
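
    For reference, the binned-data EMML iteration that the framework recovers as a special case takes the familiar multiplicative form below (standard notation, assumed here):

```latex
% Binned-data EMML update recovered as a special case (standard notation:
% y_i measured counts in detector bin i, P_{ij} probability that an emission
% from voxel j is detected in bin i, \lambda_j^{(k)} current intensity estimate):
\begin{equation}
\lambda_j^{(k+1)}
  = \frac{\lambda_j^{(k)}}{\sum_i P_{ij}}
    \sum_i P_{ij}\,
    \frac{y_i}{\sum_{j'} P_{ij'}\,\lambda_{j'}^{(k)}} .
\end{equation}
% Each iteration rescales the current image by a back-projected ratio of
% measured to predicted counts, and the iterates increase the Poisson likelihood.
```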

  17. System and method for reducing combustion dynamics in a combustor

    SciTech Connect

    Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; Srinivasan, Shiva; York, William David

    2016-11-29

    A system for reducing combustion dynamics in a combustor includes an end cap that extends radially across the combustor and includes an upstream surface axially separated from a downstream surface. A combustion chamber is downstream of the end cap, and tubes extend from the upstream surface through the downstream surface. Each tube provides fluid communication through the end cap to the combustion chamber. The system further includes means for reducing combustion dynamics in the combustor. A method for reducing combustion dynamics in a combustor includes flowing a working fluid through tubes that extend axially through an end cap that extends radially across the combustor and obstructing at least a portion of the working fluid flowing through a first set of the tubes.

  18. A Non-smooth Newton Method for Multibody Dynamics

    SciTech Connect

    Erleben, K.; Ortiz, R.

    2008-09-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
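
    One common way to recast such complementarity conditions as non-smooth equations, suitable for a semismooth Newton iteration, uses the Fischer-Burmeister function shown below; whether the paper uses this particular reformulation is not stated in the abstract.

```latex
% One common reformulation (whether the paper uses this particular function is
% not stated in the abstract): the Fischer-Burmeister function
\begin{equation}
\phi(a,b) \;=\; \sqrt{a^{2}+b^{2}} \;-\; a \;-\; b ,
\qquad
\phi(a,b)=0 \;\Longleftrightarrow\; a \ge 0,\; b \ge 0,\; ab = 0 ,
\end{equation}
% turns each contact complementarity pair (e.g., a normal force and the
% corresponding gap or relative normal velocity) into one non-smooth equation
% \phi(a,b)=0, so the whole contact problem becomes a system F(x)=0 that a
% generalized (semismooth) Newton iteration can solve.
```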

  19. A likelihood ratio test for nested proportions.

    PubMed

    Chen, Yi-Fan; Yabes, Jonathan G; Brooks, Maria M; Singh, Sonia; Weissfeld, Lisa A

    2015-02-10

    For policy and medical issues, it is important to know if the proportion of an event changes after an intervention is administered. When the later proportion can only be calculated in a portion of the sample used to compute the previous proportion, the two proportions are nested. The motivating example for this work comes from the need to test whether admission rates in emergency departments are different between the first and a return visit. Here, subjects who contribute to the admission rate at the return visit must be included in the first rate and also return, but not vice versa. This conditionality means that existing methods, including the basic test of equality of two proportions, longitudinal data analysis methods, and recurrent event approaches are not directly applicable. Currently, researchers can only explore this question by the use of descriptive statistics. We propose a likelihood ratio test to compare two nested proportions by using the product of conditional probabilities. This test accommodates the conditionality, subject dependencies, and cluster effects and can be implemented in SAS PROC NLMIXED allowing for the proposed method to be readily used in an applied setting. Simulation studies showed that our approach provides unbiased estimates and reasonable power. Moreover, it generally outperforms the two-sample proportion z-test, in the presence of heterogeneity, and the Cochran-Mantel-Haenszel test. An example based on readmission rates through an emergency department is used to illustrate the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  1. Analysis of nonlinear dynamics by square matrix method

    NASA Astrophysics Data System (ADS)

    Yu, Li Hua

    2017-03-01

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. We show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number for high order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with a much lower dimension. The Jordan decomposition leads to a transformation to a new variable, which is an accurate action-angle variable, in good agreement with trajectories and tune obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and tune fluctuation. Thus the square matrix theory shows good potential for the theoretical understanding of complicated dynamical systems and for guiding the optimization of dynamic apertures. The method is illustrated by many examples of comparison between theory and numerical simulation. In particular, we show that the square matrix method can be used for fast optimization to reduce the nonlinearity of a system.

  2. Population Dynamics of the Stationary Phase Utilizing the ARGOS Method

    NASA Astrophysics Data System (ADS)

    Algarni, S.; Charest, A. J.; Iannacchione, G. S.

    2015-03-01

    The Area Recorded Generalized Optical Scattering (ARGOS) approach to light scattering employs a large image-capture array, allowing for a well-defined geometry in which images may be manipulated to extract structure, via the intensity at a specific scattering wave vector, I(q), and dynamics, via the intensity at a specific scattering wave vector over time, I(q,t). The ARGOS method provides morphological dynamics noninvasively over a long time period and allows for a variety of aqueous conditions. This is important because traditional growth models do not provide for conditions similar to the natural environment. The present study found that the population dynamics of bacteria do not follow a traditional growth model and that the ARGOS method allowed for the observation of bacterial changes in terms of individual particles and population dynamics in real time. The observations of relative total intensity suggest that there is no stationary phase and that the bacterial population demonstrates sinusoidal-type patterns consistently subsequent to log-phase growth. These observations were compared to shape changes by modeling fractal dimension and to size changes by modeling effective radius.
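    The record describes extracting the scattering intensity I(q) from large image arrays; a generic sketch of how such an azimuthally averaged profile might be computed from a single 2-D frame with NumPy is given below. This is a standard radial-averaging recipe on a synthetic frame, not the authors' ARGOS code, and the pixel-to-q calibration is omitted.

```python
import numpy as np

def radial_intensity_profile(frame, n_bins=100):
    """Azimuthally average a 2-D scattering frame to get I(q) vs. radial bin.
    'frame' is a 2-D array of intensities centred on the beam axis."""
    ny, nx = frame.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2.0, y - ny / 2.0)          # radial distance of each pixel
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    which = np.clip(which, 0, n_bins - 1)
    intensity = np.bincount(which, weights=frame.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    q = 0.5 * (bins[:-1] + bins[1:])                  # bin centres (pixel units)
    return q, intensity / np.maximum(counts, 1)

# Example: a synthetic frame with an isotropic ring of scattered light.
yy, xx = np.indices((256, 256))
rr = np.hypot(xx - 128, yy - 128)
frame = np.exp(-0.5 * ((rr - 40) / 5.0) ** 2) + 0.01 * np.random.rand(256, 256)
q, Iq = radial_intensity_profile(frame)
print("peak of I(q) near radial bin:", q[np.argmax(Iq)])
```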

  3. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  4. Search area Expanding Strategy and Dynamic Priority Setting Method in the Improved 2-opt Method

    NASA Astrophysics Data System (ADS)

    Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao

    We propose a new 2-opt-based method for a Memetic algorithm, that is, Genetic Algorithms (GAs) with a local search. The basic idea comes from the fast 2-opt(1) method and the improved 2-opt method(20). Our new search method uses the "Priority" employed in the improved 2-opt method. The "Priority" represents the contribution level in the exchange of genes. Matayoshi's method exchanges genes based on their previous contribution to the fitness value improvement. We propose a new search method using the concept of the Priority, which we call the search area expanding strategy method in the improved 2-opt method. Our method expands the search area by using the "Priority". In computer experiments, it is shown that the computation time to find the exact solution depends on the value of the Priority. If an appropriate priority cannot be set beforehand, we propose a method to adapt it to a suitable value: if no improvement is achieved for a certain number of generations, our dynamic priority method modifies the priority by a mutation operation. Experimental results show that the search area expanding strategy method embedded with the dynamic priority setting method can find the exact solution at an earlier generation than the other methods used for comparison.

  5. Carrier Recovery Enhancement for Maximum-Likelihood Doppler Shift Estimation in Mars Exploration Missions

    NASA Astrophysics Data System (ADS)

    Cattivelli, Federico S.; Estabrook, Polly; Satorius, Edgar H.; Sayed, Ali H.

    2008-11-01

    One of the most crucial stages of the Mars exploration missions is the entry, descent, and landing (EDL) phase. During EDL, maintaining reliable communication from the spacecraft to Earth is extremely important for the success of future missions, especially in case of mission failure. EDL is characterized by severe accelerations, caused by friction, parachute deployment, and rocket firing, among others. These dynamics cause a severe Doppler shift on the carrier communications link to Earth. Methods based on Maximum Likelihood have been proposed to estimate the Doppler shift. So far these methods have proved successful, but it is expected that the next Mars mission, known as the Mars Science Laboratory, will suffer from higher dynamics and lower SNR. Thus, improving the existing estimation methods becomes a necessity. We propose a Maximum Likelihood approach that takes into account the power in the data tones to enhance carrier recovery and improve the estimation performance by up to 3 dB. Simulations are performed using real data obtained during the EDL stage of the Mars Exploration Rover B (MERB) mission.
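    For a single tone in additive white Gaussian noise, the maximum-likelihood frequency estimate is the maximizer of the periodogram; the sketch below shows that baseline estimator (a coarse FFT search refined by parabolic interpolation). It does not include the data-tone enhancement proposed in the paper, and the sampling rate and signal parameters are illustrative.

```python
import numpy as np

def ml_doppler_estimate(x, fs, pad=8):
    """Coarse-to-fine ML frequency estimate of a single complex tone in AWGN:
    the ML estimate maximizes the periodogram |FFT|^2."""
    n = len(x)
    nfft = pad * n                              # zero-padding gives a finer frequency grid
    spec = np.abs(np.fft.fft(x, nfft)) ** 2
    freqs = np.fft.fftfreq(nfft, d=1.0 / fs)
    k = np.argmax(spec)
    # Local parabolic interpolation around the peak bin.
    km, kp = (k - 1) % nfft, (k + 1) % nfft
    denom = spec[km] - 2 * spec[k] + spec[kp]
    delta = 0.5 * (spec[km] - spec[kp]) / denom if denom != 0 else 0.0
    return freqs[k] + delta * fs / nfft

# Example: a tone with a simulated Doppler offset, sampled at 40 kHz.
fs, f_true = 40e3, 1600.0 + 37.5
t = np.arange(4096) / fs
x = np.exp(2j * np.pi * f_true * t) \
    + 0.5 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
print(f"true {f_true:.1f} Hz, estimated {ml_doppler_estimate(x, fs):.1f} Hz")
```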

  6. Analysis methods for wind turbine control and electrical system dynamics

    NASA Technical Reports Server (NTRS)

    Hinrichsen, E. N.

    1995-01-01

    The integration of new energy technologies into electric power systems requires methods which recognize the full range of dynamic events in both the new generating unit and the power system. Since new energy technologies are initially perceived as small contributors to large systems, little attention is generally paid to system integration, i.e. dynamic events in the power system are ignored. As a result, most new energy sources are only capable of base-load operation, i.e. they have no load following or cycling capability. Wind turbines are no exception. Greater awareness of this implicit (and often unnecessary) limitation is needed. Analysis methods are recommended which include very low penetration (infinite bus) as well as very high penetration (stand-alone) scenarios.

  7. Comparing dynamical systems by a graph matching method

    NASA Astrophysics Data System (ADS)

    Zheng, Jiongxuan; Skufca, Joseph D.; Bollt, Erik M.

    2013-07-01

    In this paper, we consider comparing dynamical systems by using a method of graph matching, either between the graphs representing the underlying symbolic dynamics, or between the graphs approximating the action of the systems on a fine but otherwise non-generating partition. For conjugate systems, the graphs are isomorphic and we show that the permutation matrices that relate the adjacency matrices coincide with the solution of Monge’s mass transport problem. We use the underlying earth mover’s distance (EMD) to generate the “approximate” matching matrix to illustrate the association of graphs which are derived from equal-distance partitioning of the phase spaces of systems. In addition, for one system which embeds into the other, we show that the comparison of these two systems by our method is an issue of subgraph matching.
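    As a rough illustration of matching two graphs through an assignment formulation (not the paper's EMD construction), one can pair the nodes of two adjacency matrices by solving a linear assignment problem on a simple degree-based cost and then score the induced permutation. The cost choice, the example graphs, and all names below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approximate_graph_match(A, B):
    """Pair nodes of two equally sized graphs by solving a linear assignment
    problem on a simple degree-based cost, then score the induced matching.
    (A crude stand-in for the EMD-based matching described in the record.)"""
    deg_a, deg_b = A.sum(axis=1), B.sum(axis=1)
    cost = np.abs(deg_a[:, None] - deg_b[None, :])          # node-to-node cost
    rows, cols = linear_sum_assignment(cost)                 # optimal assignment
    perm = np.zeros_like(A)
    perm[rows, cols] = 1                                     # permutation matrix
    mismatch = np.linalg.norm(perm.T @ A @ perm - B)         # Frobenius mismatch
    return cols, mismatch

# Example: B is A with its nodes relabelled, so a perfect match (mismatch 0) exists.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
sigma = np.array([2, 0, 3, 1])                               # a node relabelling
P = np.eye(4)[sigma]
B = P @ A @ P.T
mapping, score = approximate_graph_match(A, B)
print("node mapping:", mapping, "Frobenius mismatch:", score)
```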

  8. Sensitivity evaluation of dynamic speckle activity measurements using clustering methods.

    PubMed

    Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H

    2010-07-01

    We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suitable feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
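    A minimal sketch of the feature-extraction-plus-clustering idea (wavelet energies of each pixel's intensity history, followed by K-means) is shown below, assuming the PyWavelets and scikit-learn packages are available. The wavelet, decomposition level, cluster count, and synthetic data are arbitrary choices; the paper additionally evaluates several other clustering algorithms.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_energy_features(stack, wavelet="db4", level=4):
    """stack: (n_frames, ny, nx) intensity history of a dynamic speckle sequence.
    Returns an (n_pixels, level + 1) array of mean energies of the wavelet
    coefficients of each pixel's temporal signal."""
    n_frames, ny, nx = stack.shape
    signals = stack.reshape(n_frames, -1).T            # one row per pixel
    feats = []
    for s in signals:
        coeffs = pywt.wavedec(s, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
        feats.append([np.mean(c ** 2) for c in coeffs])
    return np.asarray(feats)

# Synthetic example: left half "active" (fast fluctuations), right half slow drift.
rng = np.random.default_rng(0)
frames = rng.normal(size=(256, 16, 16))
frames[:, :, 8:] = np.cumsum(0.05 * rng.normal(size=(256, 16, 8)), axis=0)
features = wavelet_energy_features(frames)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
activity_map = labels.reshape(16, 16)
print(activity_map)
```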

  9. A Dynamic Interval Decision-Making Method Based on GRA

    NASA Astrophysics Data System (ADS)

    Xue-jun, Tang; Jia, Chen

    According to the basic theory of grey relational analysis, this paper constructs a three-dimensional grey interval relation degree model for the three dimensions of time, index and scheme. On this basis, it sets up and solves a single-objective optimization model, obtains each scheme's degree of affiliation to the positive/negative ideal scheme, and ranks the schemes accordingly. The results show that the three-dimensional grey relation degree simplifies the traditional dynamic multi-attribute decision-making method and can better handle dynamic multi-attribute decision-making problems with interval numbers. Finally, this paper demonstrates the practicality and efficiency of the model through a case study.
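    The paper's three-dimensional grey interval relation degree is not detailed in the record; as background, a minimal sketch of the classical (crisp-number) grey relational degree that it generalizes is given below, with the conventional distinguishing coefficient rho = 0.5 and an equal-weight average. The example scores are invented.

```python
import numpy as np

def grey_relational_degree(alternatives, reference, rho=0.5):
    """Classical grey relational analysis.
    alternatives: (m, n) matrix of m schemes evaluated on n criteria (already
    normalized so that larger is better); reference: length-n ideal sequence."""
    delta = np.abs(alternatives - reference)              # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)    # relational coefficients
    return xi.mean(axis=1)                                # equal-weight relation degree

# Three schemes scored on four normalized criteria, compared to the ideal scheme.
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.5, 0.9, 0.6, 0.8],
              [0.7, 0.7, 0.7, 0.9]])
ideal = X.max(axis=0)                                     # positive ideal scheme
degrees = grey_relational_degree(X, ideal)
print("grey relational degrees:", degrees, "ranking:", np.argsort(-degrees) + 1)
```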

  10. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  11. Advanced three-dimensional dynamic analysis by boundary element methods

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Ahma, S.

    1985-01-01

    Advanced formulations of boundary element method for periodic, transient transform domain and transient time domain solution of three-dimensional solids have been implemented using a family of isoparametric boundary elements. The necessary numerical integration techniques as well as the various solution algorithms are described. The developed analysis has been incorporated in a fully general purpose computer program BEST3D which can handle up to 10 subregions. A number of numerical examples are presented to demonstrate the accuracy of the dynamic analyses.

  13. A method for analyzing dynamic stall of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Crimi, P.; Reeves, B. L.

    1972-01-01

    A model for each of the basic flow elements involved in the unsteady stall of a two-dimensional airfoil in incompressible flow is presented. The interaction of these elements is analyzed using a digital computer. Computations of the loading during transient and sinusoidal pitching motions are in good qualitative agreement with measured loads. The method was used to confirm that large torsional response of helicopter blades detected in flight tests can be attributed to dynamic stall.

  14. Efficient methods for studying stochastic disease and population dynamics.

    PubMed

    Keeling, M J; Ross, J V

    2009-01-01

    Stochastic ecological and epidemiological models are now routinely used to inform management and decision making throughout conservation and public health. A difficulty with the use of such models is the need to resort to simulation methods when the population size (and hence the size of the state space) becomes large, resulting in the need for a large amount of computation to achieve statistical confidence in results. Here we present two methods that allow evaluation of all quantities associated with one- (and higher-) dimensional Markov processes with large state spaces. We illustrate these methods using SIS disease dynamics and by studying species that are affected by catastrophic events. The methods open up the possibility of extending exact Markov methods to real-world problems, providing techniques for efficient parameterisation and subsequent analysis.
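    As an illustration of the kind of exact Markov-chain calculation the authors advocate (not their specific algorithms), the sketch below builds the tridiagonal generator of a stochastic SIS model and solves one linear system for the expected time to disease extinction from every starting state. The parameter values are arbitrary.

```python
import numpy as np

def sis_expected_extinction_times(N, beta, gamma):
    """Exact expected time to extinction for the stochastic SIS model with
    N individuals, infection rate beta*I*(N-I)/N and recovery rate gamma*I.
    Solves Q tau = -1 on the transient states I = 1..N (state 0 is absorbing)."""
    Q = np.zeros((N, N))                       # generator restricted to I = 1..N
    for idx in range(N):
        I = idx + 1
        up = beta * I * (N - I) / N            # I -> I + 1
        down = gamma * I                       # I -> I - 1
        if I < N:
            Q[idx, idx + 1] = up
        if I > 1:
            Q[idx, idx - 1] = down             # transition to a transient state
        Q[idx, idx] = -(up + down)             # flow from I = 1 to 0 leaves the set
    tau = np.linalg.solve(Q, -np.ones(N))
    return tau                                 # tau[i] = E[time to extinction | I0 = i+1]

tau = sis_expected_extinction_times(N=50, beta=1.5, gamma=1.0)
print(f"mean extinction time from a single infective: {tau[0]:.1f}")
print(f"mean extinction time from the highest state:  {tau.max():.1f}")
```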

  15. Maximum likelihood estimates of polar motion parameters

    NASA Technical Reports Server (NTRS)

    Wilson, Clark R.; Vicente, R. O.

    1990-01-01

    Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.

  16. Comparison of induced rules based on likelihood estimation

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2002-03-01

    Rule induction methods have been applied to knowledge discovery in databases and data mining. The empirical results obtained show that they are very powerful and that important knowledge has been extracted from datasets. However, comparison and evaluation of rules are based not on statistical evidence but on rather naive indices, such as conditional probabilities and functions of conditional probabilities. In this paper, we introduce two approaches to the statistical comparison of induced rules. For the statistical evaluation, the likelihood ratio test and Fisher's exact test play an important role: the likelihood ratio statistic measures statistical information about an information table and is used to measure the difference between two tables.

  17. Smart algorithms and adaptive methods in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Tinsley Oden, J.

    1989-05-01

    A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed. The basic components of adaptive methods are: (1) data structures, that is what approaches are available for modifying data structures of an approximation so as to reduce errors; (2) error estimation, that is what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, what algorithms are available which can function in changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.

  18. Maximum likelihood for genome phylogeny on gene content.

    PubMed

    Zhang, Hongmei; Gu, Xun

    2004-01-01

    With the rapid growth of entire-genome data, reconstructing the phylogenetic relationship among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of several approaches and has been very successful. However, there has been no reported application to genome tree-making, mainly due to the lack of an analytical form of the probability model and/or the heavy computational burden. In this paper we studied the mathematical structure of the stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern in a four-genome situation using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high rate of correct identification. Application to real data provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.

  19. Numerical continuation methods for large-scale dissipative dynamical systems

    NASA Astrophysics Data System (ADS)

    Umbría, Juan Sánchez; Net, Marta

    2016-11-01

    A tutorial on continuation and bifurcation methods for the analysis of truncated dissipative partial differential equations is presented. It focuses on the computation of equilibria, periodic orbits, their loci of codimension-one bifurcations, and invariant tori. To make it more self-contained, it includes some definitions of basic concepts of dynamical systems, and some preliminaries on the general underlying techniques used to solve non-linear systems of equations by inexact Newton methods, and eigenvalue problems by means of subspace or Arnoldi iterations.
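    As a toy illustration of the predictor-corrector structure that the tutorial treats in full generality (with inexact Newton-Krylov solvers, pseudo-arclength continuation, and eigenvalue monitoring for large systems), the sketch below traces a branch of equilibria of a scalar problem by natural-parameter continuation with a Newton corrector. The test problem is invented for illustration.

```python
import numpy as np

def continue_branch(f, dfdx, x0, mu_values):
    """Natural-parameter continuation with a Newton corrector: for each value of
    the parameter mu, the previous solution is used as predictor and corrected
    by Newton's method on f(x, mu) = 0.  (A scalar toy; the tutorial's methods
    use Newton-Krylov solvers and pseudo-arclength continuation for large
    systems and folds.)"""
    branch = []
    x = x0
    for mu in mu_values:
        for _ in range(20):                      # Newton corrector
            step = f(x, mu) / dfdx(x, mu)
            x -= step
            if abs(step) < 1e-12:
                break
        branch.append(x)
    return np.array(branch)

# Toy problem: equilibria of x' = mu - x - x^3 (a single smooth branch).
f = lambda x, mu: mu - x - x**3
dfdx = lambda x, mu: -1.0 - 3.0 * x**2
mus = np.linspace(-2.0, 2.0, 41)
xs = continue_branch(f, dfdx, x0=-1.0, mu_values=mus)
print("equilibrium at mu = 0:", xs[np.argmin(np.abs(mus))])   # should be ~0
```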

  20. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN scheme is significantly more effective at distributing the work than the Gradient Model.

  1. Maximum Marginal Likelihood Estimation for Semiparametric Item Analysis.

    ERIC Educational Resources Information Center

    Ramsay, J. O.; Winsberg, S.

    1991-01-01

    A method is presented for estimating the item characteristic curve (ICC) using polynomial regression splines. Estimation of spline ICCs is described by maximizing the marginal likelihood formed by integrating ability over a beta prior distribution. Simulation results compare this approach with the joint estimation of ability and item parameters.…

  2. Maximum likelihood Jukes-Cantor triplets: analytic solutions.

    PubMed

    Chor, Benny; Hendy, Michael D; Snir, Sagi

    2006-03-01

    Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationships of a set of taxa from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which, in a few cases, are known to converge to a local maximum on a tree that is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood function and find the maximum points analytically. This approach has so far only been successful in the very simplest cases of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.

  3. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral

  4. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
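    Both records build the maximum likelihood estimate with the EM algorithm; the generic MLEM update for Poisson emission data, x <- (x / A^T 1) * A^T (y / (A x)), is the core ingredient. A bare-bones version, without the kernelized spatial or spectral temporal basis functions of the papers and with a random toy system matrix, is sketched below.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM for a Poisson inverse problem y ~ Poisson(A x):
    x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
    In the papers' formulation, x would itself be a coefficient vector for
    kernelized spatial and spectral temporal basis functions."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)
    sens = A.T @ np.ones(n_bins) + eps          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)
        x *= (A.T @ ratio) / sens
    return x

# Toy example: a small random system matrix and a known activity vector.
rng = np.random.default_rng(1)
A = rng.random((200, 40))
x_true = rng.exponential(scale=2.0, size=40)
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y, n_iter=200)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```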

  5. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
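    A direct (if less efficient) way to obtain such a maximum likelihood rate matrix from data observed at a fixed lag is to parameterize the off-diagonal rates and numerically maximize the discrete-observation likelihood through the matrix exponential; a sketch under that assumption is given below. It is not the specialized estimator of the paper, and the two-state toy data are simulated.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def fit_rate_matrix(counts, tau):
    """Maximum likelihood rate matrix for a continuous-time Markov process
    observed at a fixed lag tau.  counts[i, j] = number of observed i -> j
    transitions between consecutive observations.  Off-diagonal rates are
    parameterized as exp(theta) to stay positive."""
    n = counts.shape[0]
    off = ~np.eye(n, dtype=bool)

    def negloglik(theta):
        K = np.zeros((n, n))
        K[off] = np.exp(theta)
        np.fill_diagonal(K, -K.sum(axis=1))
        P = expm(tau * K)
        return -np.sum(counts * np.log(np.clip(P, 1e-300, None)))

    res = minimize(negloglik, x0=np.zeros(n * (n - 1)), method="L-BFGS-B")
    K = np.zeros((n, n))
    K[off] = np.exp(res.x)
    np.fill_diagonal(K, -K.sum(axis=1))
    return K

# Toy example: simulate a two-state process with known rates, then re-estimate them.
rng = np.random.default_rng(2)
K_true = np.array([[-0.3, 0.3], [0.1, -0.1]])
P_true = expm(1.0 * K_true)                       # lag tau = 1
states = [0]
for _ in range(20000):
    states.append(rng.choice(2, p=P_true[states[-1]]))
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
print(fit_rate_matrix(counts, tau=1.0))
```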

  6. A support-operator method for 3-D rupture dynamics

    NASA Astrophysics Data System (ADS)

    Ely, Geoffrey P.; Day, Steven M.; Minster, Jean-Bernard

    2009-06-01

    We present a numerical method to simulate spontaneous shear crack propagation within a heterogeneous, 3-D, viscoelastic medium. Wave motions are computed on a logically rectangular hexahedral mesh, using the generalized finite-difference method of Support Operators (SOM). This approach enables modelling of non-planar surfaces and non-planar fault ruptures. Our implementation, the Support Operator Rupture Dynamics (SORD) code, is highly scalable, enabling large-scale multiprocessor calculations. The fault surface is modelled by coupled double nodes, where rupture occurs as dictated by the local stress conditions and a frictional failure law. The method successfully performs test problems developed for the Southern California Earthquake Center (SCEC)/U.S. Geological Survey (USGS) dynamic earthquake rupture code validation exercise, showing good agreement with semi-analytical boundary integral method results. We undertake further dynamic rupture tests to quantify numerical errors introduced by shear deformations to the hexahedral mesh. We generate a family of meshes distorted by simple shearing, in the along-strike direction, up to a maximum of 73°. For SCEC/USGS validation problem number 3, grid-induced errors increase with mesh shear angle, with the logarithm of error approximately proportional to angle over the range tested. At 73°, rms misfits are about 10 per cent for peak slip rate, and 0.5 per cent for both rupture time and total slip, indicating that the method (which, up to now, we have applied mainly to near-vertical strike-slip faulting) is also capable of handling geometries appropriate to low-angle surface-rupturing thrust earthquakes. Additionally, we demonstrate non-planar rupture effects, by modifying the test geometry to include, respectively, cylindrical curvature and sharp kinks.

  7. Maximum likelihood estimates of pairwise rearrangement distances.

    PubMed

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R

    2017-06-21

    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A Poisson-Boltzmann dynamics method with nonperiodic boundary condition

    NASA Astrophysics Data System (ADS)

    Lu, Qiang; Luo, Ray

    2003-12-01

    We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: it is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.

  9. A Method for Molecular Dynamics on Curved Surfaces

    PubMed Central

    Paquay, Stefan; Kusters, Remy

    2016-01-01

    Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633
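    The heart of the method is combining a constraint algorithm with velocity Verlet; a deliberately simplified sketch for a single particle confined to a sphere is given below, using rescaling of the position onto the sphere and projection of the velocity onto the tangent plane in place of the full SHAKE/RATTLE iterations (and omitting the Langevin thermostat). All names and parameters are illustrative.

```python
import numpy as np

def constrained_verlet_step(r, v, force, dt, mass=1.0, radius=1.0):
    """One velocity-Verlet step for a particle constrained to |r| = radius.
    Constraint handling is simplified: the position is rescaled back onto the
    sphere and the velocity is projected onto the local tangent plane
    (a stand-in for the full iterative constraint solver used in the paper)."""
    a = force(r) / mass
    v_half = v + 0.5 * dt * a
    r_new = r + dt * v_half
    r_new *= radius / np.linalg.norm(r_new)          # project position onto sphere
    a_new = force(r_new) / mass
    v_new = v_half + 0.5 * dt * a_new
    n_hat = r_new / radius
    v_new -= np.dot(v_new, n_hat) * n_hat            # remove radial velocity component
    return r_new, v_new

# Example: free motion on the unit sphere approximately conserves speed
# and keeps the particle on the surface.
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.5, 0.2])                        # tangent to the sphere at r
zero_force = lambda pos: np.zeros(3)
for _ in range(10000):
    r, v = constrained_verlet_step(r, v, zero_force, dt=1e-3)
print("|r| =", np.linalg.norm(r), " speed =", np.linalg.norm(v))
```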

  10. Empirical Likelihood-Based Confidence Interval of ROC Curves.

    PubMed

    Su, Haiyan; Qin, Yongsong; Liang, Hua

    2009-11-01

    In this article we propose an empirical likelihood-based confidence interval for receiver operating characteristic curves which are based on a continuous-scale test. The approach is easily understood, simply implemented, and computationally efficient. The results from our simulation studies indicate that the finite-sample numerical performance slightly outperforms the most promising methods published recently. Two real datasets are analyzed by using the proposed method and the existing bootstrap-based method.

  11. Dynamic Analysis of a Spur Gear by the Dynamic Stiffness Method

    NASA Astrophysics Data System (ADS)

    HUANG, K. J.; LIU, T. S.

    2000-07-01

    This study treats a spur gear tooth as a variable cross-section Timoshenko beam to construct a dynamic model capable of obtaining the transient response of spur gears with involute profiles. The dynamic responses of a single tooth and of a gear pair are investigated. Firstly, polynomials are used to represent the gear blank and the tooth profile. The dynamic stiffness matrix and natural frequencies of the gear are in turn calculated. The forced response of a tooth subject to a shaft-driven transmission torque is calculated by performing modal analysis. This study takes into account time-varying stiffness and mass matrices and the gear meshing forces at moving meshing points. The forced response at arbitrary points in a gear tooth can be obtained. Calculated fillet stresses and strains are compared with those in the literature to verify the proposed method.

  12. Dynamic Methods for Investigating the Conformational Changes of Biological Macromolecules

    NASA Astrophysics Data System (ADS)

    Vidolova-Angelova, E.; Peshev, Z.; Shaquiri, Z.; Angelov, D.

    2010-01-01

    Fast conformational changes of biological macromolecules, such as RNA folding and DNA-protein interactions, play a crucial role in their biological functions. Conformational changes are supposed to take place in the sub-millisecond to few-second time range. The development of appropriate dynamic methods possessing both high spatial (one nucleotide) and temporal resolution is therefore of considerable interest. Here, we present two different approaches we developed for studying nucleic acid conformational changes, namely salt-induced tRNA folding and the interaction of the transcription factor NF-κB with its recognition DNA sequence. Importantly, only a single laser pulse is sufficient for accurately measuring the whole decay curve. This peculiarity can be exploited in dynamical experiments.

  13. Evaluation of the sensing block method for dynamic force measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghui; Chen, Hao; Li, Wenzhao; Song, Li

    2017-01-01

    The sensing block method was proposed for dynamic force measurement by Tanimura et al. in 1994. Compared with the Split Hopkinson pressure bar (SHPB) technique, it can provide a much longer measuring time for testing the dynamic properties of materials. However, the signals recorded by the sensing block are always accompanied by additional oscillations. Tanimura et al. discussed the effect of the force rising edge on the test results, but more research is still needed. In this paper, some more dominant factors have been extracted through dimensional analysis. Finite element simulations have been performed to assess these factors. Based on the analysis and simulation, some valuable results are obtained, and some criteria proposed in this paper can be applied in the design or selection of the sensing block.

  14. System and method for reducing combustion dynamics in a combustor

    DOEpatents

    Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David

    2015-09-01

    A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend from the upstream surface through the downstream surface. A divider inside a tube bundle defines a diluent passage that extends axially through the downstream surface, and a diluent supply in fluid communication with the divider provides diluent flow to the diluent passage. A method for reducing combustion dynamics in a combustor includes flowing a fuel through tube bundles, flowing a diluent through a diluent passage inside a tube bundle, wherein the diluent passage extends axially through at least a portion of the end cap into a combustion chamber, and forming a diluent barrier in the combustion chamber between the tube bundle and at least one other adjacent tube bundle.

  15. System and method for reducing combustion dynamics in a combustor

    DOEpatents

    Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David

    2013-08-20

    A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend through the end cap. A diluent supply in fluid communication with the end cap provides diluent flow to the end cap. Diluent distributors circumferentially arranged inside at least one tube bundle extend downstream from the downstream surface and provide fluid communication for the diluent flow through the end cap. A method for reducing combustion dynamics in a combustor includes flowing fuel through tube bundles that extend axially through an end cap, flowing a diluent through diluent distributors into a combustion chamber, wherein the diluent distributors are circumferentially arranged inside at least one tube bundle and each diluent distributor extends downstream from the end cap, and forming a diluent barrier in the combustion chamber between at least one pair of adjacent tube bundles.

  16. Censored Median Regression and Profile Empirical Likelihood

    PubMed Central

    Subramanian, Sundarraman

    2007-01-01

    We implement profile empirical likelihood based inference for censored median regression models. Inference for any specified sub-vector is carried out by profiling out the nuisance parameters from the “plug-in” empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi square distributions. Unlike for the full empirical likelihood, however, the derived asymptotic distribution has intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul’s minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set. PMID:19112527

  17. A Steady Method for the Analysis of Evaporation Dynamics.

    PubMed

    Gunay, Ahmet Alperen; Sett, Soumyadip; Oh, Junho; Miljkovic, Nenad

    2017-09-26

    Droplet evaporation is an important phenomenon governing many man-made and natural processes. Characterizing the rate of evaporation with high accuracy has attracted the attention of numerous scientists over the past century. Traditionally, researchers have studied evaporation by observing the change in the droplet size in a given time interval. However, the transient nature, coupled with the significant mass-transfer-governed gas dynamics occurring at the droplet three-phase contact line, makes the classical method crude. Furthermore, the intricate balance played by the internal and external flows, evaporation kinetics, thermocapillarity, binary-mixture dynamics, curvature, and moving contact lines makes the decoupling of these processes impossible with classical transient methods. Here, we present a method to measure the rate of evaporation of spatially steady droplets. By utilizing a piezoelectric dispenser to feed microscale droplets (R ≈ 9 µm) to a larger evaporating droplet at a prescribed frequency, we can both create variable-sized droplets on any surface and study their evaporation rate by modulating the piezoelectric droplet addition frequency. Using our spatially steady technique, we studied water evaporation of droplets having base radii ranging from 20 µm to 250 µm on surfaces of different functionalities (45° ≤ θ ≤ 162°, where θ is the apparent advancing contact angle). We benchmarked our technique against the classical unsteady method, showing an improvement of 140% in evaporation rate measurement accuracy. Our work not only characterizes the evaporation dynamics on functional surfaces, but also provides an experimental platform to finally enable the decoupling of the complex physics governing the ubiquitous droplet evaporation process.
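    At steady state the evaporated volume flux must balance the dispensed volume flux, so the evaporation rate follows directly from the microdroplet radius and the dispensing frequency. The short calculation below uses the R ≈ 9 µm figure quoted in the record; the dispensing frequency is an arbitrary example value, not one reported by the authors.

```python
import numpy as np

R_micro = 9e-6                  # radius of a dispensed microdroplet [m] (from the record)
f = 50.0                        # dispensing frequency [Hz] -- illustrative value only
rho = 998.0                     # density of water [kg/m^3]

V_micro = (4.0 / 3.0) * np.pi * R_micro**3       # volume of one microdroplet [m^3]
evap_volume_rate = f * V_micro                   # steady-state volumetric evaporation rate
evap_mass_rate = rho * evap_volume_rate          # steady-state mass evaporation rate

print(f"volume rate: {evap_volume_rate * 1e15:.1f} pL/s")
print(f"mass rate:   {evap_mass_rate * 1e12:.1f} ng/s")
```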

  18. Assessment of moisturizers and barrier function restoration using dynamic methods.

    PubMed

    Rosado, Catarina; Pinto, Pedro; Rodrigues, Luis Monteiro

    2009-02-01

    Dynamic methods, such as the mathematical modelling of the transepidermal water loss curves that result from a plastic occlusion stress test (POST), enable the complete characterization of the dynamic water balance established between the deep and the superficial skin structures. Previous studies have indicated that this methodology was able to detect impaired barrier function and differentiate normal and dry skin. The objective of the present study is to apply the discriminative capacity of the model to the efficacy testing of moisturizing products. A moisturizing cream containing urea was applied for 2 weeks to the lower leg of volunteers with dry skin. After this period, a POST was performed in the treated and control untreated site and a mathematical model was adjusted to the resulting desorption curves. Results indicate that, after treatment of the skin with the cream for 2 weeks, statistically different kinetic parameters are obtained in the treated site, which suggests an improvement in the water dynamic balance and skin barrier. The data generated by this study was compared with that obtained in a similar investigation that was performed a year earlier. Analysis of the dispersion of results indicates that the method has good reproducibility. These findings confirm that the method has enough sensitivity to assess in vivo the effect of moisturizers on human skin, and also that this evaluation can be performed in a shorter period than that required by the regression method. This methodology can also be considered more informative, because the analysis of the different parameters obtained by the model is indicative of the mechanism of action of the product applied to the skin.

  19. Evolutionary dynamics on graphs: Efficient method for weak selection

    NASA Astrophysics Data System (ADS)

    Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph

    2009-04-01

    Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.

  20. Corrected profile likelihood confidence interval for binomial paired incomplete data.

    PubMed

    Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal

    2013-01-01

    Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative, with a large expected confidence interval width (ECIW), in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probability and shorter ECIWs. The performance of the method along with the corrections is demonstrated through extensive simulation. Finally, three real-world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.

  1. Predicting crash likelihood and severity on freeways with real-time loop detector data.

    PubMed

    Xu, Chengcheng; Tarko, Andrew P; Wang, Wei; Liu, Pan

    2013-08-01

    Real-time crash risk prediction using traffic data collected from loop detector stations is useful in dynamic safety management systems aimed at improving traffic safety through application of proactive safety countermeasures. The major drawback of most of the existing studies is that they focus on the crash risk without consideration of crash severity. This paper presents an effort to develop a model that predicts the crash likelihood at different levels of severity with a particular focus on severe crashes. The crash data and traffic data used in this study were collected on the I-880 freeway in California, United States. This study considers three levels of crash severity: fatal/incapacitating injury crashes (KA), non-incapacitating/possible injury crashes (BC), and property-damage-only crashes (PDO). The sequential logit model was used to link the likelihood of crash occurrences at different severity levels to various traffic flow characteristics derived from detector data. The elasticity analysis was conducted to evaluate the effect of the traffic flow variables on the likelihood of crash and its severity. The results show that the traffic flow characteristics contributing to crash likelihood were quite different at different levels of severity. The PDO crashes were more likely to occur under congested traffic flow conditions with highly variable speed and frequent lane changes, while the KA and BC crashes were more likely to occur under less congested traffic flow conditions. High speed, coupled with a large speed difference between adjacent lanes under uncongested traffic conditions, was found to increase the likelihood of severe crashes (KA). This study applied the 20-fold cross-validation method to estimate the prediction performance of the developed models. The validation results show that the model's crash prediction performance at each severity level was satisfactory. The findings of this study can be used to predict the probabilities of crash at

  2. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  3. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  4. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics.

    PubMed

    Miao, Linling; Young, Charles D; Sing, Charles E

    2017-07-14

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  5. Informative Parameters of Dynamic Geo-electricity Methods

    NASA Astrophysics Data System (ADS)

    Tursunmetov, R.

    With the growing complexity of geological tasks and the need to reveal anomaly zones connected with ore, oil, gas and water availability, methods of dynamic geo-electricity have come into use. In these methods the geological environment is considered as an irregular, multi-phase medium. The main dynamic element of this environment is the double electric layer, which develops on the boundary between the solid and liquid phases. In ore- or water-saturated environments, double electric layers become electrochemically or electrokinetically active elements of the geo-electric environment, which, in turn, form a natural electric field. This field influences the distribution of artificially created fields, and their interaction has a complicated superposition or non-linear character. Therefore, the geological environment is considered an active one, able to accumulate and transform artificially superposed fields. The main dynamic property of this environment is the non-linear behavior of the specific electric resistance and of the soil polarization as functions of current density and measurement frequency, which serve as informative parameters for dynamic geo-electricity methods. The study of the electric properties of disperse soils in an impulse-frequency regime, together with the study of the temporal and frequency characteristics of the electric field, is of main interest for the definition of geo-electric anomalies. The study of volt-ampere characteristics of the electromagnetic field has great practical significance. These characteristics are determined by electrochemically active ore- and water-saturated zones. The mentioned parameters depend on the polarity of the initiated field, in particular on the character, composition and mineralization of the ore-saturated zone and on the presence of a natural electric field under cathode and anode mineralization. The non-linear behavior of the environment's dynamic properties affects the structure of the initiated field, which allows the location of anomalous zones to be defined. Finally, the study of the spatial anisotropy of the soil's dynamic properties will allow the identification of filtration flows

  6. 3D Dynamic Crack Rupture by a Finite Volume Method

    NASA Astrophysics Data System (ADS)

    Ben Jemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.

    2007-12-01

    Dynamic rupture of a 3D spontaneous crack of arbitrary shape has been investigated using a Finite Volume (FV) approach. The full domain is decomposed into tetrahedra, while the surface on which the rupture is supposed to take place is discretized with triangles that are faces of tetrahedra. Because of this meshing strategy, any shape of the rupture surface can be designed, and the meshing is performed once before simulations start. First of all, the elastodynamic equations are recast into a pseudo-conservative form for easy application of the FV discretisation. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles which have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented as well. Numerical solutions on a planar fault are achieved for problem version 3 of the SCEC community dynamic-rupture benchmark exercise (Harris and Archuleta, 2004) and compared with those provided by a Finite Difference (FD) technique (Day et al., 2005). Another benchmark problem is also tackled involving a nonplanar curved fault (Cruz-Atienza et al., 2007). Solutions for this difficult exercise are compared with those computed with a Boundary Integral (BI) method (Aochi et al., 2000). In both benchmark problems, comparisons show that rupture fronts are well modelled, with a slight delay in time, especially along the antiplane direction, related to the low-order interpolation of the FV approach, which requires further mesh refinement and/or a higher-order interpolation strategy as in the Discontinuous Galerkin approach. Slip-rate and shear stress amplitudes are well modelled, as are stopping phases and stress overshoots. We expect this method, which is well adapted to multi-processor parallel computing, to be competitive with others for solving large scale
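
    For reference, the linear slip-weakening law mentioned above has a simple closed form: fault strength drops linearly from a static to a dynamic level over a critical slip distance. A minimal sketch with generic parameter names, not tied to the paper's benchmark values:

```python
def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction: strength falls linearly from the static level
    tau_s to the dynamic level tau_d over the critical slip distance d_c, and stays
    at tau_d once the accumulated slip exceeds d_c."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

# Example: halfway through the weakening distance, strength is midway between levels.
print(slip_weakening_strength(slip=0.2, tau_s=81.6e6, tau_d=63.0e6, d_c=0.4))
```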

  7. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  8. Improved sensitivity of dynamic CT with a new visualization method for radial distribution of lung nodule enhancement

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Wormanns, Dag; Beyer, Florian; Blaffert, Thomas; Buelow, Thomas

    2005-04-01

    For differential diagnosis of pulmonary nodules, assessment of contrast enhancement at chest CT scans after administration of contrast agent has been suggested. Likelihood of malignancy is considered very low if the contrast enhancement is lower than a certain threshold (10-20 HU). Automated average density measurement methods have been developed for that purpose. However, a certain fraction of malignant nodules does not exhibit significant enhancement when averaged over the whole nodule volume. The purpose of this paper is to test a new method for reduction of false negative results. We have investigated a method of showing not only a single averaged contrast enhancement number, but a more detailed enhancement curve for each nodule, showing the enhancement as a function of distance to the boundary. A test set consisting of 11 malignant and 11 benign pulmonary lesions was used for validation, with diagnoses known from biopsy or follow-up for more than 24 months. For each nodule, dynamic CT scans were available: the unenhanced native scan and scans at 60, 120, 180 and 240 seconds after onset of contrast injection (1 - 4 mm reconstructed slice thickness). The suggested method for measurement and visualization of contrast enhancement as radially resolved curves has reduced false negative results (apparently unenhancing but truly malignant nodules), and thus improved sensitivity. It proved to be a valuable tool for differential diagnosis between malignant and benign lesions using dynamic CT.
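
    A hedged sketch of the kind of radially resolved measurement described above: given co-registered pre- and post-contrast volumes and a nodule mask, enhancement can be binned by distance to the nodule boundary using a Euclidean distance transform. The array names, units, and binning below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

def radial_enhancement_curve(pre_hu, post_hu, nodule_mask, bin_width_mm=1.0, voxel_mm=1.0):
    """Mean contrast enhancement (post - pre, in HU) as a function of distance to the
    nodule boundary.  Inputs are co-registered 3-D arrays; nodule_mask is boolean."""
    enhancement = post_hu.astype(float) - pre_hu.astype(float)
    # Distance (in mm) from each voxel inside the nodule to the nearest boundary voxel.
    dist_mm = ndimage.distance_transform_edt(nodule_mask, sampling=voxel_mm)
    bins = np.arange(0.0, dist_mm.max() + bin_width_mm, bin_width_mm)
    centers, means = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        shell = nodule_mask & (dist_mm >= lo) & (dist_mm < hi)
        if shell.any():
            centers.append(0.5 * (lo + hi))
            means.append(enhancement[shell].mean())
    return np.array(centers), np.array(means)
```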

  9. Simulation of plume dynamics by the Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Mora, Peter; Yuen, David A.

    2017-09-01

    The Lattice Boltzmann Method (LBM) is a semi-microscopic method to simulate fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box being heated from below, and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates of the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.

  10. Some splitting methods for equations of geophysical fluid dynamics

    NASA Astrophysics Data System (ADS)

    Ji, Zhongzhen; Wang, Bin

    1995-03-01

    In this paper, equations of atmospheric and oceanic dynamics are reduced to a kind of evolutionary equation in operator form. On this basis it is concluded that the separability of motion stages is relative, and it is shown that the traditional splitting methods, which are built on the physical separability of the fast and slow stages, neglect the interaction between the two stages to some extent. Three splitting patterns are then summarized from the splitting methods in common use and compared. The comparison shows that only the improved splitting pattern (ISP) can be second-order accurate while preserving the interaction well. Finally, applications of several splitting methods to numerical simulations of typhoon tracks make clear that ISP gives the best results and can save more than 80% of the CPU time.
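
    As a point of reference for the splitting patterns discussed above, the sketch below shows a generic second-order Strang-type splitting for a linear system du/dt = (A + B)u: a half step with A, a full step with B, and another half step with A. It illustrates the splitting idea only; it is not the paper's improved splitting pattern (ISP).

```python
import numpy as np
from scipy.linalg import expm

def strang_step(u, A, B, dt):
    """One Strang-split step for du/dt = (A + B) u:
    half step with A, full step with B, half step with A (second-order accurate)."""
    half_A = expm(0.5 * dt * A)
    full_B = expm(dt * B)
    return half_A @ (full_B @ (half_A @ u))

# Compare one split step against the exact propagator expm(dt*(A+B)).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u0 = rng.standard_normal(4)
dt = 1e-2
err = np.linalg.norm(strang_step(u0, A, B, dt) - expm(dt * (A + B)) @ u0)
print(f"one-step splitting error: {err:.2e}")   # local error is O(dt^3)
```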

  11. The Piecewise Cubic Method (PCM) for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lee, Dongwook; Faller, Hugues; Reyes, Adam

    2017-07-01

    We present a new high-order finite volume reconstruction method for hyperbolic conservation laws. The method is based on a piecewise cubic polynomial which provides solutions with fifth-order accuracy in space. The spatially reconstructed solutions are evolved in time with fourth-order accuracy by tracing the characteristics of the cubic polynomials. As a result, our temporal update scheme provides a significantly simpler and computationally more efficient approach to achieving fourth-order accuracy in time, relative to the comparable fourth-order Runge-Kutta method. We demonstrate that the PCM solutions converge at fifth order when solving 1D smooth flows described by hyperbolic conservation laws. We test the new scheme on a range of numerical experiments, including both gas dynamics and magnetohydrodynamics applications in multiple spatial dimensions.

  12. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  13. Discriminative likelihood score weighting based on acoustic-phonetic classification for speaker identification

    NASA Astrophysics Data System (ADS)

    Suh, Youngjoo; Kim, Hoirin

    2014-12-01

    In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
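
    The scoring idea can be illustrated with a small sketch: each speaker is modelled by a GMM, frame-level log-likelihoods are computed for a test utterance, and a per-frame weight vector reweights those scores before summation. The uniform weights and toy features below stand in for the acoustic-phonetic class weights that the paper learns discriminatively.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def weighted_gmm_identify(frames, speaker_gmms, frame_weights):
    """Identify a speaker from weighted frame-level log-likelihoods.
    frames: (T, D) feature matrix; frame_weights: (T,) nonnegative weights."""
    scores = {}
    for name, gmm in speaker_gmms.items():
        frame_ll = gmm.score_samples(frames)          # per-frame log-likelihood
        scores[name] = float(np.dot(frame_weights, frame_ll))
    return max(scores, key=scores.get), scores

# Toy usage: two "speakers" trained on shifted random features, uniform frame weights.
rng = np.random.default_rng(1)
gmms = {}
for name, shift in [("spk_a", 0.0), ("spk_b", 3.0)]:
    g = GaussianMixture(n_components=4, random_state=0)
    g.fit(rng.standard_normal((500, 13)) + shift)
    gmms[name] = g
test_frames = rng.standard_normal((100, 13)) + 3.0
best, _ = weighted_gmm_identify(test_frames, gmms, np.ones(100))
print(best)   # expected: "spk_b"
```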

  14. A novel method to study cerebrospinal fluid dynamics in rats

    PubMed Central

    Karimy, Jason K.; Kahle, Kristopher T.; Kurland, David B.; Yu, Edward; Gerzanich, Volodymyr; Simard, J. Marc

    2014-01-01

    Background Cerebrospinal fluid (CSF) flow dynamics play critical roles in both the immature and adult brain, with implications for neurodevelopment and disease processes such as hydrocephalus and neurodegeneration. Remarkably, the only reported method to date for measuring CSF formation in laboratory rats is the indirect tracer dilution method (a.k.a., ventriculocisternal perfusion), which has limitations. New Method Anesthetized rats were mounted in a stereotaxic apparatus, both lateral ventricles were cannulated, and the Sylvian aqueduct was occluded. Fluid exited one ventricle at a rate equal to the rate of CSF formation plus the rate of infusion (if any) into the contralateral ventricle. Pharmacological agents infused at a constant known rate into the contralateral ventricle were tested for their effect on CSF formation in real-time. Results The measured rate of CSF formation was increased by blockade of the Sylvian aqueduct but was not changed by increasing the outflow pressure (0–3 cm of H2O). In male Wistar rats, CSF formation was age-dependent: 0.39±0.06, 0.74±0.05, 1.02±0.04 and 1.40±0.06 µL/min at 8, 9, 10 and 12 weeks, respectively. CSF formation was reduced 57% by intraventricular infusion of the carbonic anhydrase inhibitor, acetazolamide. Comparison with existing methods Tracer dilution methods do not permit ongoing real-time determination of the rate of CSF formation, are not readily amenable to pharmacological manipulations, and require critical assumptions. Direct measurement of CSF formation overcomes these limitations. Conclusions Direct measurement of CSF formation in rats is feasible. Our method should prove useful for studying CSF dynamics in normal physiology and disease models. PMID:25554415

  15. Applicability of optical scanner method for fine root dynamics

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi

    2016-04-01

    Fine root dynamics is one of the important components in forest carbon cycling, as ~60 % of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production and mortality in order to understand the carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count increases and decreases of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows us to observe root behavior directly without destruction, but has several weaknesses; e.g., the difficulty of scaling up the results to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield data that are insufficiently quantitative and objective. Recently, a scanner method has been proposed, which can produce much larger images (A4 size) at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition; and 2) the impact of window size on the results, using a random-sampling exercise. We applied our new protocol to analyze temporal changes of root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in temporal changes in the abundance and the growth of fine roots but less in the decomposition. We also examined

  16. A Dynamic Poisson-Boltzmann Method of Simulating Polypeptides

    NASA Astrophysics Data System (ADS)

    Campbell, Victoria S.; Grayce, Christopher J.

    1998-03-01

    We present a method of performing molecular dynamics simulations of charged polymeric species in solution such as polypeptides that takes into account the instantaneous response of the ionic atmosphere to fluctuations in polymer conformation without employing explicit solvent and salt ions. Using density functional theory we write the free energy of the ionic atmosphere around the polymer as a functional of its density in the linearized Poisson-Boltzmann limit. We then add to a normal MD simulation of a charged polymer extra degrees of freedom, namely the parameters describing the instantaneous ion atmosphere density. These parameters vary dynamically under the influence of the coupled mechanical and thermodynamic forces, so that the instantaneous variations in the ionic atmosphere as the polymer conformation fluctuates are described. Using this method MD simulations were carried out on a model polypeptide system and both conformational properties as well as the electric field generated by this method were compared to results obtained by using fixed Debye-Huckel potentials.

  17. Coupled-cluster methods for core-hole dynamics

    NASA Astrophysics Data System (ADS)

    Picon, Antonio; Cheng, Lan; Hammond, Jeff R.; Stanton, John F.; Southworth, Stephen H.

    2014-05-01

    Coupled cluster (CC) is a powerful numerical method used in quantum chemistry in order to take into account electron correlation with high accuracy and size consistency. In the CC framework, excited, ionized, and electron-attached states can be described by the equation of motion (EOM) CC technique. However, bringing CC methods to describe molecular dynamics induced by x rays is challenging. X rays have the special feature of interacting with core-shell electrons that are close to the nucleus. Core-shell electrons can be ionized or excited to a valence shell, leaving a core-hole that will decay very fast (e.g. 2.4 fs for K-shell of Ne) by emitting photons (fluorescence process) or electrons (Auger process). Both processes are a clear manifestation of a many-body effect, involving electrons in the continuum in the case of Auger processes. We review our progress of developing EOM-CC methods for core-hole dynamics. Results of the calculations will be compared with measurements on core-hole decays in atomic Xe and molecular XeF2. This work is funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.

  18. A dynamic calibration method for the pressure transducer

    NASA Astrophysics Data System (ADS)

    Wang, Zhongyu; Wang, Zhuoran; Li, Qiang

    2016-01-01

    Pressure transducers are widely used in industry. A well-calibrated pressure transducer improves the performance of the precision instruments it is coupled to, so calibration is the key to ensuring high precision and good dynamic characteristics. Unfortunately, the current calibration method can usually be applied only in the laboratory under good conditions, and only one pressure transducer can be calibrated at a time, so the calibration efficiency falls short of the requirements of modern industry. A fast dynamic calibration technology, comprising a calibration device and a corresponding data processing method, is proposed in this paper. Firstly, the pressure transducers to be calibrated are placed in a small cavity chamber, and the calibration process consists of a single loop; the outputs of each calibrated transducer are recorded automatically by the control terminal. Secondly, LabView programming is used for data acquisition and processing, so the repeatability and nonlinearity indicators can be obtained directly. Finally, several pressure transducers are calibrated simultaneously in an experiment to verify the suggested calibration technology. The experimental results show that this method can be used to calibrate pressure transducers in practical engineering measurements.

  19. Reducing the likelihood of long tennis matches.

    PubMed

    Barnett, Tristan; Alan, Brown; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key Points: The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. A final tiebreaker set reduces the length of matches, as currently used in the US Open. A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
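
    The generating-function calculation can be mimicked with a simple probability recursion. The sketch below computes the exact distribution of the number of points in a single standard game (first to 4 points, win by 2) as a function of the server's point-win probability p; it does not reproduce the paper's 50-40 game rules, and the geometrically small deuce tail is truncated.

```python
def game_length_distribution(p, max_points=40):
    """Probability that a standard tennis game lasts exactly n points, for a server
    who wins each point with probability p.  Mass beyond max_points is truncated."""
    q = 1.0 - p
    dist = {}
    states = {(0, 0): 1.0}          # (server points, receiver points) -> probability
    for n in range(1, max_points + 1):
        nxt = {}
        for (a, b), pr in states.items():
            for (na, nb), w in (((a + 1, b), p), ((a, b + 1), q)):
                if (na >= 4 or nb >= 4) and abs(na - nb) >= 2:
                    dist[n] = dist.get(n, 0.0) + pr * w     # game ends after n points
                else:
                    nxt[(na, nb)] = nxt.get((na, nb), 0.0) + pr * w
        states = nxt
    return dist

dist = game_length_distribution(0.6)
print(round(sum(dist.values()), 6))                  # ~1.0 (tiny truncated remainder)
print(sum(n * pr for n, pr in dist.items()))         # expected number of points per game
```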

  20. Dimension-independent likelihood-informed MCMC

    SciTech Connect

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
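
    The simplest example of a discretization-invariant, function-space MCMC sampler of the kind this work builds on is the preconditioned Crank-Nicolson (pCN) proposal, sketched below. It is not the DILI construction itself (there is no likelihood-informed operator weighting here), but it shows why the acceptance probability depends only on the likelihood and therefore does not degrade as the discretization is refined.

```python
import numpy as np

def pcn_sampler(neg_log_likelihood, L_prior, n_samples, beta=0.2, rng=None):
    """Preconditioned Crank-Nicolson MCMC for a target exp(-Phi(u)) with respect to a
    Gaussian prior N(0, C), where C = L_prior @ L_prior.T.  The proposal preserves the
    prior, so the acceptance ratio involves only the negative log-likelihood Phi."""
    rng = rng or np.random.default_rng()
    dim = L_prior.shape[0]
    u = L_prior @ rng.standard_normal(dim)          # start from a prior draw
    phi_u = neg_log_likelihood(u)
    samples = []
    for _ in range(n_samples):
        xi = L_prior @ rng.standard_normal(dim)
        v = np.sqrt(1.0 - beta**2) * u + beta * xi  # prior-preserving proposal
        phi_v = neg_log_likelihood(v)
        if np.log(rng.uniform()) < phi_u - phi_v:   # accept with prob min(1, exp(Phi(u)-Phi(v)))
            u, phi_u = v, phi_v
        samples.append(u.copy())
    return np.asarray(samples)
```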

  1. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  2. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.

  3. Dimension-independent likelihood-informed MCMC

    SciTech Connect

    Cui, Tiangang; Law, Kody J.H.; Marzouk, Youssef M.

    2016-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  4. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498

  5. Efficient local-orbitals based method for ultrafast dynamics

    NASA Astrophysics Data System (ADS)

    Boleininger, Max; Horsfield, Andrew P.

    2017-07-01

    Computer simulations are invaluable for the study of ultrafast phenomena, as they allow us to directly access the electron dynamics. We present an efficient method for simulating the evolution of electrons in molecules under the influence of time-dependent electric fields, based on the Gaussian tight binding model. This model improves upon standard self-charge-consistent tight binding by the inclusion of polarizable orbitals and a self-consistent description of charge multipoles. Using the examples of bithiophene, terthiophene, and tetrathiophene, we show that this model produces electrostatic, electrodynamic, and explicitly time-dependent properties in strong agreement with density-functional theory, but at a small fraction of the cost.

  6. Quantum dynamics by the constrained adiabatic trajectory method

    SciTech Connect

    Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  7. A method for the evaluation of wide dynamic range cameras

    NASA Astrophysics Data System (ADS)

    Wong, Ping Wah; Lu, Yu Hua

    2012-01-01

    We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.

  8. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
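
    A minimal sketch of the approximation for one concrete case, a Poisson GLM with exponential nonlinearity and Gaussian covariates: the covariate sum over exp(x_i·θ) in the exact log-likelihood is replaced by N times its closed-form expectation (a log-normal moment), and the data term X^T y is precomputed, so each evaluation no longer scans the design matrix. This is an illustration of the idea under those assumptions, not the paper's code.

```python
import numpy as np

def exact_loglik(theta, X, y):
    """Exact Poisson-GLM log-likelihood (up to a constant in theta):
    sum_i [ y_i * (x_i . theta) - exp(x_i . theta) ], which scans all N rows of X."""
    eta = X @ theta
    return y @ eta - np.exp(eta).sum()

def expected_loglik(theta, Xty, n, mu, Sigma):
    """Expected log-likelihood: the sum of exp(x_i . theta) is replaced by
    n * E[exp(x . theta)], available in closed form for Gaussian covariates with mean
    mu and covariance Sigma.  Only the precomputed vector Xty = X.T @ y is needed,
    so the per-evaluation cost is independent of n."""
    return Xty @ theta - n * np.exp(mu @ theta + 0.5 * theta @ Sigma @ theta)

# Toy comparison on simulated data: the two values should be close.
rng = np.random.default_rng(0)
n, d = 100_000, 5
mu, Sigma = np.zeros(d), np.eye(d)
X = rng.multivariate_normal(mu, Sigma, size=n)
theta_true = rng.normal(0.0, 0.2, size=d)
y = rng.poisson(np.exp(X @ theta_true))
print(exact_loglik(theta_true, X, y), expected_loglik(theta_true, X.T @ y, n, mu, Sigma))
```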

  9. Study on Method of Wide Dynamic Range Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Hu, X.; Teng, Y.

    2013-12-01

    A seismic data acquisition system (DAS) is an indispensable device for digitizing seismic signals. Its performance directly determines the quality of the recorded seismic signal and ultimately affects the results of data processing. The amplitude of seismic signals spans a very wide range; its dynamic range can exceed 160 dB, and the dynamic range of the output of a broadband seismometer is also greater than 150 dB. Yet the dynamic range of the 24-bit DAS currently in widespread use, based on Σ-Δ A/D converters, is only about 130 dB. As a result, small seismic signals cannot be resolved by a 24-bit DAS, while the amplitudes of large seismic events are clipped. For instance, because the 2008 Wenchuan Ms 8.0 earthquake was a huge seismic event, the seismic-wave amplitudes recorded by all the 24-bit seismometers in Sichuan Province, China, were severely limited. The earthquake monitoring stations thus lost their function exactly when the data were most needed, and the rare records of a huge seismic event were lost for later study. In practical applications a DAS is therefore required to record small seismic signals with a high signal-to-noise ratio and high resolution, and to record large signals faithfully without amplitude limiting. Accordingly, we present a new method of wide dynamic range data acquisition: the analog-to-digital conversion stage classifies the input signal amplitude into several levels; the smaller-amplitude levels are digitized with higher resolution while the larger-amplitude levels are digitized with lower resolution; each amplitude level can be digitized by an independent ordinary 24-bit Σ-Δ A/D converter, since its dynamic range is much smaller; and, finally, the controller-processing unit combines the digital outputs of all levels into 32-bit data, which has high resolution and
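
    The level-combination step at the end of the abstract can be illustrated with a toy sketch: the same input is viewed through several gain channels, each clipping at a fixed full scale, and for every sample the highest-gain unclipped channel is kept and rescaled to a common scale. The gains and full-scale value are arbitrary placeholders, not the design values of the proposed 24-bit/32-bit system.

```python
import numpy as np

def combine_gain_channels(signal, gains=(1.0, 16.0, 256.0), full_scale=1.0):
    """Illustrative level combination: each gain channel clips at +/- full_scale; for
    every sample, keep the highest-gain channel that is not clipped and rescale it
    back to a common scale, extending the usable dynamic range of a single channel."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        value, gain = np.clip(x * gains[0], -full_scale, full_scale), gains[0]
        for g in sorted(gains):
            if abs(x * g) < full_scale:          # this channel is not clipped
                value, gain = x * g, g           # prefer the highest unclipped gain
        out[i] = value / gain
    return out

# Example: a tiny signal and a large one are both represented without clipping.
print(combine_gain_channels(np.array([1e-4, 0.9])))
```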

  10. Visualization Methods to Quantify DNAPL Dynamics in Chemical Remediation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, X.; Jawitz, J. W.

    2006-12-01

    A novel multiple-wavelength visualization method is under development for quantifying multiphase fluid dynamics in porous media. This technique is applied here for in situ characterization of laboratory-scale DNAPL chemical remediation, including co-solvent flushing and surfactant flushing. Development of this method is motivated by the limitations of current quantitative imaging methods. In the method both light absorption (Beer's Law) and interfacial diffraction (Fresnel's Law) are considered. Furthermore, the use of multiple wavelengths introduces the ability to eliminate the interface structure effect. By using images taken at two wavelengths using band-pass filters, the heterogeneous DNAPL saturation distribution in a two-dimensional laboratory chamber can be quantified at any time during chemical remediation. Previously published DNAPL visualization techniques have been shown to be somewhat accurate for post-spill conditions, but are ineffective once significant dissolution has occurred. The method introduced here is shown to achieve mass balances of 90% and greater even during chemical remediation. Furthermore, the heterogeneous saturation distribution in the chamber (i.e. Eulerian description) and the distribution over stream tubes (i.e. Lagrangian description) are quantified using the new method and shown to be superior to those obtained using the binary imaging technique.

  11. Multiscale molecular dynamics using the matched interface and boundary method

    SciTech Connect

    Geng Weihua; Wei, G.W.

    2011-01-20

    The Poisson-Boltzmann (PB) equation is an established multiscale model for electrostatic analysis of biomolecules and other dielectric systems. PB based molecular dynamics (MD) approach has a potential to tackle large biological systems. Obstacles that hinder the current development of PB based MD methods are concerns in accuracy, stability, efficiency and reliability. The presence of complex solvent-solute interface, geometric singularities and charge singularities leads to challenges in the numerical solution of the PB equation and electrostatic force evaluation in PB based MD methods. Recently, the matched interface and boundary (MIB) method has been utilized to develop the first second order accurate PB solver that is numerically stable in dealing with discontinuous dielectric coefficients, complex geometric singularities and singular source charges. The present work develops the PB based MD approach using the MIB method. New formulation of electrostatic forces is derived to allow the use of sharp molecular surfaces. Accurate reaction field forces are obtained by directly differentiating the electrostatic potential. Dielectric boundary forces are evaluated at the solvent-solute interface using an accurate Cartesian-grid surface integration method. The electrostatic forces located at reentrant surfaces are appropriately assigned to related atoms. Extensive numerical tests are carried out to validate the accuracy and stability of the present electrostatic force calculation. The new PB based MD method is implemented in conjunction with the AMBER package. MIB based MD simulations of biomolecules are demonstrated via a few example systems.

  12. Multiscale molecular dynamics using the matched interface and boundary method

    PubMed Central

    Geng, Weihua; Wei, G.W.

    2010-01-01

    The Poisson-Boltzmann (PB) equation is an established multiscale model for electrostatic analysis of biomolecules and other dielectric systems. PB based molecular dynamics (MD) approach has a potential to tackle large biological systems. Obstacles that hinder the current development of PB based MD methods are concerns in accuracy, stability, efficiency and reliability. The presence of complex solvent-solute interface, geometric singularities and charge singularities leads to challenges in the numerical solution of the PB equation and electrostatic force evaluation in PB based MD methods. Recently, the matched interface and boundary (MIB) method has been utilized to develop the first second order accurate PB solver that is numerically stable in dealing with discontinuous dielectric coefficients, complex geometric singularities and singular source charges. The present work develops the PB based MD approach using the MIB method. New formulation of electrostatic forces is derived to allow the use of sharp molecular surfaces. Accurate reaction field forces are obtained by directly differentiating the electrostatic potential. Dielectric boundary forces are evaluated at the solvent-solute interface using an accurate Cartesian-grid surface integration method. The electrostatic forces located at reentrant surfaces are appropriately assigned to related atoms. Extensive numerical tests are carried out to validate the accuracy and stability of the present electrostatic force calculation. The new PB based MD method is implemented in conjunction with the AMBER package. MIB based MD simulations of biomolecules are demonstrated via a few example systems. PMID:21088761

  13. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). This is achieved by utilizing the characteristics of EM, which include self-organization, adaptivity and self-learning, inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation regardless of whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, e.g. a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
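
    A hedged sketch of the same task with off-the-shelf tools: recover the Lorenz parameters from a short noisy trajectory by minimizing a trajectory misfit with SciPy's differential evolution, an evolutionary optimizer in the same family as, but not identical to, the paper's EM scheme. The window length, noise level, and parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(params, t_eval, x0=(1.0, 1.0, 1.0)):
    sol = solve_ivp(lorenz, (t_eval[0], t_eval[-1]), x0, t_eval=t_eval,
                    args=tuple(params), rtol=1e-6)
    return sol.y

# Synthetic "observations" from the classic parameters (10, 28, 8/3) plus noise.
t_eval = np.linspace(0.0, 2.0, 201)
rng = np.random.default_rng(0)
observed = simulate([10.0, 28.0, 8.0 / 3.0], t_eval) + 0.05 * rng.standard_normal((3, 201))

def misfit(params):
    return float(np.mean((simulate(params, t_eval) - observed) ** 2))

result = differential_evolution(misfit, bounds=[(5, 15), (20, 35), (1, 5)],
                                seed=0, tol=1e-6, maxiter=50)
print(result.x)   # should be close to (10, 28, 8/3) on this short, lightly-noised window
```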

  14. Predicting the likelihood of purchase by big data

    NASA Astrophysics Data System (ADS)

    Zhao, P. Y.; Shi, Y. M.

    2017-04-01

    Big data has changed our way of life and learning; for example, information extraction and analysis methods, usually classified as data mining, have changed radically. Big data analytics is used to estimate the likelihood that consumers will purchase a specific product. In this paper, we construct models for evaluating existing products and predicting the likelihood of purchase for new products, and the results indicate that our method is feasible and effective.

  15. Photobleaching Methods to Study Golgi Complex Dynamics in Living Cells

    PubMed Central

    Snapp, Erik Lee

    2014-01-01

    The Golgi complex (GC) is a highly dynamic organelle that constantly receives and exports proteins and lipids from both the endoplasmic reticulum and the plasma membrane. While protein trafficking can be monitored with traditional biochemical methods, these approaches average the behaviors of millions of cells, provide modest temporal information and no spatial information. Photobleaching methods enable investigators to monitor protein trafficking in single cells or even single GC stacks with subsecond precision. Furthermore, photobleaching can be exploited to monitor the behaviors of resident GC proteins to provide insight into mechanisms of retention and trafficking. In this chapter, general photobleaching approaches with laser scanning confocal microscopes are described. Importantly, the problems associated with many fluorescent proteins (FPs) and their uses in the secretory pathway are discussed and appropriate choices are suggested. For example, Enhanced Green Fluorescent Protein (EGFP) and most red FPs are extremely problematic. Finally, options for data analyses are described. PMID:24295308

  16. An automated dynamic water vapor permeation test method

    NASA Astrophysics Data System (ADS)

    Gibson, Phillip; Kendrick, Cyrus; Rivin, Donald; Charmchii, Majid; Sicuranza, Linda

    1995-05-01

    This report describes an automated apparatus developed to measure the transport of water vapor through materials under a variety of conditions. The apparatus is more convenient to use than the traditional test methods for textiles and clothing materials, and allows one to use a wider variety of test conditions to investigate the concentration-dependent and nonlinear transport behavior of many of the semipermeable membrane laminates which are now available. The dynamic moisture permeation cell (DMPC) has been automated to permit multiple setpoint testing under computer control, and to facilitate investigation of transient phenomena. Results generated with the DMPC are in agreement with and of comparable accuracy to those from the ISO 11092 (sweating guarded hot plate) method of measuring water vapor permeability.

  17. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. . Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics within offshore technology is discussed first. General computational methods are then reviewed, together with the state of the art and the uncertainties in the flow problems of offshore technology, in which developed, developing and still-undeveloped problems are categorized; future work follows. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper but also aerodynamics, such as wind loading or current-wave-wind interaction; phenomena such as cavitation and underwater noise; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, and surface and internal waves; and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key concepts are singled out as identifying marine hydrodynamics in offshore technology: the free surface and vortex shedding.

  18. A spatiotemporal characterization method for the dynamic cytoskeleton

    PubMed Central

    Alhussein, Ghada; Shanti, Aya; Farhat, Ilyas A. H.; Timraz, Sara B. H.; Alwahab, Noaf S. A.; Pearson, Yanthe E.; Martin, Matthew N.; Christoforou, Nicolas

    2016-01-01

    The significant gap between quantitative and qualitative understanding of cytoskeletal function is a pressing problem; microscopy and labeling techniques have improved qualitative investigations of localized cytoskeleton behavior, whereas quantitative analyses of whole cell cytoskeleton networks remain challenging. Here we present a method that accurately quantifies cytoskeleton dynamics. Our approach digitally subdivides cytoskeleton images using interrogation windows, within which box‐counting is used to infer a fractal dimension (D_f) to characterize spatial arrangement, and gray value intensity (GVI) to determine actin density. A partitioning algorithm further obtains cytoskeleton characteristics from the perinuclear, cytosolic, and periphery cellular regions. We validated our measurement approach on Cytochalasin‐treated cells using transgenically modified dermal fibroblast cells expressing fluorescent actin cytoskeletons. This method differentiates between normal and chemically disrupted actin networks, and quantifies rates of cytoskeletal degradation. Furthermore, GVI distributions were found to be inversely proportional to D_f, having several biophysical implications for cytoskeleton formation/degradation. We additionally demonstrated detection sensitivity of differences in D_f and GVI for cells seeded on substrates with varying degrees of stiffness, and coated with different attachment proteins. This general approach can be further implemented to gain insights on dynamic growth, disruption, and structure of the cytoskeleton (and other complex biological morphology) due to biological, chemical, or physical stimuli. © 2016 The Authors. Cytoskeleton Published by Wiley Periodicals, Inc. PMID:27015595
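
    The fractal-dimension part of the pipeline is standard box counting, sketched below for a thresholded 2-D image: count the occupied boxes N(s) at several box sizes s and fit log N(s) against log s. The box sizes are arbitrary choices, and the gray-value intensity (GVI) measure reduces to a masked mean, so it is omitted; this is not the authors' partitioned implementation.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension D_f of a non-empty binary (thresholded) image by
    box counting: count occupied boxes N(s) at several box sizes s and fit
    log N(s) = -D_f * log s + c."""
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s x s boxes, then flag occupied boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Smoke test: a dense random mask should give D_f close to 2.
print(box_counting_dimension(np.random.default_rng(0).random((256, 256)) > 0.5))
```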

  19. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.

  20. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes

    PubMed Central

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL), and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different than regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062

  1. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes.

    PubMed

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B; Kosorok, Michael R

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL), and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different than regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation.

  2. Sensitivity based method for structural dynamic model improvement

    NASA Astrophysics Data System (ADS)

    Lin, R. M.; Du, H.; Ong, J. H.

    1993-05-01

    Sensitivity analysis, the study of how a structure's dynamic characteristics change with design variables, has been used to predict structural modification effects in design for many decades. In this paper, methods for calculating the eigensensitivity, frequency response function sensitivity and its modified new formulation are presented. The implementation of these sensitivity analyses to the practice of finite element model improvement using vibration test data, which is one of the major applications of experimental modal testing, is discussed. Since it is very difficult in practice to measure all the coordinates which are specified in the finite element model, sensitivity based methods become essential and are, in fact, the only appropriate methods of tackling the problem of finite element model improvement. Comparisons of these methods are made in terms of the amount of measured data required, the speed of convergence and the magnitudes of modelling errors. Also, it is identified that the inverse iteration technique can be effectively used to minimize the computational costs involved. The finite element model of a plane truss structure is used in numerical case studies to demonstrate the effectiveness of the applications of these sensitivity based methods to practical engineering structures.
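
    The eigensensitivity referred to above is usually computed from the classical first-order formula dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ for a mass-normalized mode φ of the generalized eigenproblem (K − λM)φ = 0. The sketch below evaluates it on a toy two-degree-of-freedom system and checks it against a finite difference; the system and the design variable (a single spring stiffness) are illustrative assumptions, not the paper's truss model.

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_sensitivity(K, M, dK_dp, dM_dp, mode=0):
    """Sensitivity of one eigenvalue of (K - lambda*M) phi = 0 with respect to a design
    variable p: d(lambda)/dp = phi^T (dK/dp - lambda*dM/dp) phi for a mass-normalized mode."""
    lam, phi = eigh(K, M)                 # eigenvectors are M-orthonormal (mass-normalized)
    l, v = lam[mode], phi[:, mode]
    return v @ (dK_dp - l * dM_dp) @ v

# Finite-difference check on a 2-DOF system where p scales one spring stiffness.
K0 = np.array([[2.0, -1.0], [-1.0, 1.0]])
M0 = np.eye(2)
dK = np.array([[1.0, 0.0], [0.0, 0.0]])
dM = np.zeros((2, 2))
eps = 1e-6
lam_plus = eigh(K0 + eps * dK, M0, eigvals_only=True)[0]
lam_minus = eigh(K0 - eps * dK, M0, eigvals_only=True)[0]
print(eigenvalue_sensitivity(K0, M0, dK, dM, mode=0), (lam_plus - lam_minus) / (2 * eps))
```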

  3. Sampling variability in forensic likelihood-ratio computation: A simulation study.

    PubMed

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Meuwly, Didier

    2015-12-01

    Recently, in the forensic biometric community, there is a growing interest to compute a metric called "likelihood-ratio" when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio computation method is used to convert the score to a likelihood-ratio. The likelihood-ratio is the probability of the score given the hypothesis of the prosecution, Hp (the two biometric specimens arose from a same source), divided by the probability of the score given the hypothesis of the defense, Hd (the two biometric specimens arose from different sources). Given a set of training scores under Hp and a set of training scores under Hd, several methods exist to convert a score to a likelihood-ratio. In this work, we focus on the issue of sampling variability in the training sets and carry out a detailed empirical study to quantify its effect on commonly proposed likelihood-ratio computation methods. We study the effect of the sampling variability varying: 1) the shapes of the probability density functions which model the distributions of scores in the two training sets; 2) the sizes of the training sets and 3) the score for which a likelihood-ratio is computed. For this purpose, we introduce a simulation framework which can be used to study several properties of a likelihood-ratio computation method and to quantify the effect of sampling variability in the likelihood-ratio computation. It is empirically shown that the sampling variability can be considerable, particularly when the training sets are small. Furthermore, a given method of likelihood-ratio computation can behave very differently for different shapes of the probability density functions of the scores in the training sets and different scores for which likelihood-ratios are computed. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
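
    A minimal sketch of one common score-to-likelihood-ratio method (kernel density estimation of the two training-score distributions) together with a bootstrap illustration of the sampling variability the paper studies. The Gaussian toy scores, training-set sizes, and KDE choice are assumptions for illustration only, not the paper's simulation framework.

```python
import numpy as np
from scipy.stats import gaussian_kde

def score_to_likelihood_ratio(score, scores_Hp, scores_Hd):
    """Convert a biometric comparison score into a likelihood-ratio by modelling the
    two training-score distributions with kernel density estimates:
    LR(s) = f(s | Hp) / f(s | Hd)."""
    f_hp = gaussian_kde(scores_Hp)
    f_hd = gaussian_kde(scores_Hd)
    return float(f_hp(score)[0] / f_hd(score)[0])

# Sampling-variability check: re-estimate the LR on bootstrap resamples of the
# training sets and look at the spread (small training sets give a large spread).
rng = np.random.default_rng(0)
scores_Hp = rng.normal(2.0, 1.0, size=50)    # same-source training scores (toy)
scores_Hd = rng.normal(0.0, 1.0, size=50)    # different-source training scores (toy)
lrs = [score_to_likelihood_ratio(1.5,
                                 rng.choice(scores_Hp, scores_Hp.size, replace=True),
                                 rng.choice(scores_Hd, scores_Hd.size, replace=True))
       for _ in range(200)]
print(np.percentile(lrs, [5, 50, 95]))
```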

  4. Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Gao, N.; Yang, L.; Gao, F.; Kurtz, R. J.; West, D.; Zhang, S.

    2017-04-01

    A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those events that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. The temperature is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migrations of single vacancy (V) and small He-V clusters, and the growth of nano-sized He-V clusters in Fe for times in the order of seconds are studied by this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, while a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.

  5. Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method.

    PubMed

    Gao, N; Yang, L; Gao, F; Kurtz, R J; West, D; Zhang, S

    2017-04-12

    A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those events that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. The temperature is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migrations of single vacancy (V) and small He-V clusters, and the growth of nano-sized He-V clusters in Fe for times in the order of seconds are studied by this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, while a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.

  6. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).

    PubMed

    Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies.
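
    The distributed-likelihood idea can be sketched with a toy example in which simulated local datasets stand in for participant devices: each "device" evaluates the likelihood of its own data at the proposed parameter values and returns only that number, and the central optimizer aggregates the returned values. This is a minimal illustration of the paradigm, not the MIDDLE software itself; the Gaussian model and the simulated data are assumptions made for the example.

        import numpy as np
        from scipy.optimize import minimize

        # Each participant's data stays "on device"; only a likelihood value is returned.
        rng = np.random.default_rng(1)
        devices = [rng.normal(5.0, 2.0, size=30) for _ in range(10)]  # simulated local data

        def local_neg_loglik(params, data):
            """Gaussian negative log-likelihood computed locally on one device."""
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            return 0.5 * np.sum(((data - mu) / sigma) ** 2) + len(data) * (log_sigma + 0.5 * np.log(2 * np.pi))

        def aggregate_neg_loglik(params):
            """The central optimizer sees only the sum of the returned values."""
            return sum(local_neg_loglik(params, d) for d in devices)

        fit = minimize(aggregate_neg_loglik, x0=np.array([0.0, 0.0]))
        print("estimated mean and SD:", fit.x[0], np.exp(fit.x[1]))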

  7. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128

  8. Partial order optimum likelihood (POOL): maximum likelihood prediction of protein active site residues using 3D Structure and sequence properties.

    PubMed

    Tong, Wenxu; Wei, Ying; Murga, Leonel F; Ondrechen, Mary Jo; Williams, Ronald J

    2009-01-01

    A new monotonicity-constrained maximum likelihood approach, called Partial Order Optimum Likelihood (POOL), is presented and applied to the problem of functional site prediction in protein 3D structures, an important current challenge in genomics. The input consists of electrostatic and geometric properties derived from the 3D structure of the query protein alone. Sequence-based conservation information, where available, may also be incorporated. Electrostatics features from THEMATICS are combined with multidimensional isotonic regression to form maximum likelihood estimates of probabilities that specific residues belong to an active site. This allows likelihood ranking of all ionizable residues in a given protein based on THEMATICS features. The corresponding ROC curves and statistical significance tests demonstrate that this method outperforms prior THEMATICS-based methods, which in turn have been shown previously to outperform other 3D-structure-based methods for identifying active site residues. Then it is shown that the addition of one simple geometric property, the size rank of the cleft in which a given residue is contained, yields improved performance. Extension of the method to include predictions of non-ionizable residues is achieved through the introduction of environment variables. This extension results in even better performance than THEMATICS alone and constitutes to date the best functional site predictor based on 3D structure only, achieving nearly the same level of performance as methods that use both 3D structure and sequence alignment data. Finally, the method also easily incorporates such sequence alignment data, and when this information is included, the resulting method is shown to outperform the best current methods using any combination of sequence alignments and 3D structures. Included is an analysis demonstrating that when THEMATICS features, cleft size rank, and alignment-based conservation scores are used individually or in combination
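
    The monotonicity-constrained estimation step can be illustrated in one dimension with isotonic regression: the estimated probability that a residue belongs to an active site is constrained to be non-decreasing in a predictive feature. The feature, the simulated labels, and the use of scikit-learn below are assumptions for the sketch; POOL itself applies multidimensional isotonic regression to THEMATICS and geometric properties.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(2)
        # Hypothetical per-residue feature (e.g., an electrostatics-based score)
        feature = rng.uniform(0.0, 1.0, size=300)
        # Simulated labels: residues with larger feature values are more often functional
        labels = (rng.uniform(size=300) < 0.1 + 0.8 * feature).astype(float)

        # Monotone fit: the estimated probability never decreases with the feature
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True, out_of_bounds="clip")
        prob = iso.fit_transform(feature, labels)

        # Rank residues by estimated probability of belonging to the active site
        ranking = np.argsort(prob)[::-1]
        print(ranking[:10])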

  9. Collaborative Double Robust Targeted Maximum Likelihood Estimation*

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum

  10. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  11. Dynamic characterization of satellite components through non-invasive methods

    SciTech Connect

    Mullins, Joshua G; Wiest, Heather K; Mascarenas, David D. L.; Macknelly, David

    2010-10-21

    The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. This harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modelling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.

  12. Dynamic characterization of satellite components through non-invasive methods

    SciTech Connect

    Mullens, Joshua G; Wiest, Heather K; Mascarenas, David D; Park, Gyuhae

    2011-01-24

    The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. The harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modeling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.

  13. An implicit finite element method for discrete dynamic fracture

    SciTech Connect

    Gerken, Jobie M.

    1999-12-01

    A method for modeling the discrete fracture of two-dimensional linear elastic structures with a distribution of small cracks subject to dynamic conditions has been developed. The foundation for this numerical model is a plane element formulated from the Hu-Washizu energy principle. The distribution of small cracks is incorporated into the numerical model by including a small crack at each element interface. The additional strain field in an element adjacent to this crack is treated as an externally applied strain field in the Hu-Washizu energy principle. The resulting stiffness matrix is that of a standard plane element. The resulting load vector is that of a standard plane element with an additional term that includes the externally applied strain field. Except for the crack strain field equations, all terms of the stiffness matrix and load vector are integrated symbolically in Maple V so that fully integrated plane stress and plane strain elements are constructed. The crack strain field equations are integrated numerically. The modeling of dynamic behavior of simple structures was demonstrated within acceptable engineering accuracy. In the model of axial and transverse vibration of a beam and the breathing mode of vibration of a thin ring, the dynamic characteristics were shown to be within expected limits. The models dominated by tensile forces (the axially loaded beam and the pressurized ring) were within 0.5% of the theoretical values while the shear-dominated model (the transversely loaded beam) was within 5% of the calculated theoretical value. The constant strain field of the tensile problems can be modeled exactly by the numerical model. The numerical results should, therefore, be exact. The discrepancies can be accounted for by errors in the calculation of frequency from the numerical results. The linear strain field of the transverse model must be modeled by a series of constant strain elements. This is an approximation to the true strain field, so some

  14. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  15. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  16. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  17. Dynamic spectrum management as an anti-interference method

    NASA Astrophysics Data System (ADS)

    Suchanski, M.; Matyszkiel, R.; Kaniewski, P.; Kustra, M.; Gajewski, P.; Łopatka, J.

    2017-04-01

    The rapid development of systems which use wireless technologies causes the escalation of the spectrum shortage problem. That is why the concept of coordinated dynamic spectrum management has appeared. It improves efficiency of the spectrum use and makes radio communication systems more resistant to interferences. The main element of the proposed solution is a frequency broker which provides frequency assignment plans for radio networks. Direct communication between the frequency broker and radio networks enables fast change of radio data. This paper presents an anti-interference method based on dynamic spectrum management, which is an implementation and extension of IST-104/RTG-050 research task group "Cognitive Radio in NATO II" recommendations. The solution presented by the authors was tested in emulated and real environments. The designed algorithm of frequency assignment can effectively prevent interferences (including jamming) thanks to the implementation of monitoring mechanisms capable of obtaining information about the current state of radio channels from radio networks. Directions for further development of the frequency planning algorithm based on belief distorted Nash equilibria for deterministic or stochastic beliefs were proposed.

  18. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  19. Space station static and dynamic analyses using parallel methods

    NASA Technical Reports Server (NTRS)

    Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.

    1993-01-01

    Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development, testing, efficient and accurate solution with timely system response and graphical interpretation of results rarely found in industry. Based on the author's experience, similar cooperation between industry and government should be encouraged for similar large-scale projects in the future.

  20. Study on the measurement method of a dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, G.; Lin, L.; Liu, Y. L.; Li, X. X.; C-Y Lu, Stephen

    2005-01-01

    Continuous non-invasive blood component sensing and regulation is necessary for patients with metabolism disorders. Utilizing near-infrared spectroscopy for non-invasively sensing blood component concentration has been a focus topic in biomedical optics applications. It has been shown to be versatile, speedy and sensitive to several kinds of samples. However, there are no reports of any non-invasive blood component concentration detection technique (except for arterial blood oxygen saturation) that can meet the requirements of clinical application. One of the key difficulties is the influence of individual discrepancies. Dynamic spectrum (DS) is a recently presented non-invasive measurement method for sensing blood component concentration. It can theoretically eliminate the individual discrepancies of the tissues other than the pulsatile component of the arterial blood. This indicates a brand new way to measure the blood component concentration and the potential to provide absolute quantitation of hemodynamic variables. In this paper, the measurement methodology to acquire the DS from photoplethysmography (PPG) is studied. A dynamic spectrometer to acquire the DS is described.

  1. Measuring methods for evaluation of dynamic tyre properties

    NASA Astrophysics Data System (ADS)

    Kmoch, Klaus

    1992-01-01

    Extensive measuring methods for macroscopic assessment of tire properties, based on classical mechanics and dynamics, are presented. Theoretical results and measurements were included in an expert system, where the pneumatic tire is represented as a wheel with particular elastic properties. For geometry measurement of the tire surface, a laser scanner test bed was used. The tire was excited with a shaker in order to obtain acceleration signals and to estimate global parameters such as stiffness, damping, and nonlinearity influence, which is found to increase with excitation force. Tire dynamic behavior was examined at low velocities with microscopy and infrared thermography, in order to quantify the temperature rise and the tangential and normal forces in the contact area; the slip-stick oscillations were recorded with microphones. A drum test bed was used for studying tire behavior at high velocities, and the tire-vehicle interaction was established with acceleration measurements; the influence of nonuniformity on rolling stability was ascertained. The results were compared with data from theoretical models, which are point-mass systems or multibody models.

  2. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
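
    A stripped-down version of the comparison can be sketched as follows: channel occupancy is simulated as a two-state renewal process with geometric holding times, and a first-order Markov predictor is compared with a small multilayer perceptron that sees the last few occupancy states. The simulation parameters and history length are assumptions for the sketch; the paper's HMM and recurrent-network predictors and its measured spectrum data are not reproduced here.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(3)

        def simulate_occupancy(n, p_stay_busy=0.9, p_stay_idle=0.8):
            """Two-state occupancy sequence with geometric holding times."""
            states = np.empty(n, dtype=int)
            states[0] = 0
            for t in range(1, n):
                stay = p_stay_busy if states[t - 1] == 1 else p_stay_idle
                states[t] = states[t - 1] if rng.uniform() < stay else 1 - states[t - 1]
            return states

        x = simulate_occupancy(5000)
        k = 5                                     # history length fed to the predictors
        X = np.array([x[t - k:t] for t in range(k, len(x))])
        y = x[k:]
        split = 4000

        # First-order Markov predictor: most likely next state given the last state
        p_busy_given_busy = np.mean(y[:split][X[:split, -1] == 1])
        p_busy_given_idle = np.mean(y[:split][X[:split, -1] == 0])
        markov_pred = np.where(X[split:, -1] == 1,
                               p_busy_given_busy > 0.5,
                               p_busy_given_idle > 0.5).astype(int)

        # Small MLP trained on the last k occupancy states
        mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        mlp.fit(X[:split], y[:split])
        mlp_pred = mlp.predict(X[split:])

        print("Markov accuracy:", np.mean(markov_pred == y[split:]))
        print("MLP accuracy:   ", np.mean(mlp_pred == y[split:]))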

  3. Space station static and dynamic analyses using parallel methods

    NASA Technical Reports Server (NTRS)

    Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.

    1993-01-01

    Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development, testing, efficient and accurate solution with timely system response and graphical interpretation of results rarely found in industry. Based on the author's experience, similar cooperation between industry and government should be encouraged for similar large-scale projects in the future.

  4. A dynamically adjusted mixed emphasis method for building boosting ensembles.

    PubMed

    Gomez-Verdejo, Vanessa; Arenas-Garcia, Jerónimo; Figueiras-Vidal, Aníbal R

    2008-01-01

    Progressively emphasizing samples that are difficult to classify correctly is the basis for the recognized high performance of real Adaboost (RA) ensembles. The corresponding emphasis function can be written as a product of a factor that measures the quadratic error and a factor related to the proximity to the classification border; this fact opens the door to exploring the potential advantages provided by using adjustable combined forms of these factors. In this paper, we introduce a principled procedure to select the combination parameter each time a new learner is added to the ensemble, just by maximizing the associated edge parameter, calling the resulting method the dynamically adapted weighted emphasis RA (DW-RA). A number of application examples illustrate the performance improvements obtained by DW-RA.

  5. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  6. Implementing efficient dynamic formal verification methods for MPI programs.

    SciTech Connect

    Vakkalanka, S.; DeLisi, M.; Gopalakrishnan, G.; Kirby, R. M.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; Univ. of Utah; Univ. of Illinois

    2008-01-01

    We examine the problem of formally verifying MPI programs for safety properties through an efficient dynamic (runtime) method in which the processes of a given MPI program are executed under the control of an interleaving scheduler. To ensure full coverage for given input test data, the algorithm must take into consideration MPI's out-of-order completion semantics. The algorithm must also ensure that nondeterministic constructs (e.g., MPI wildcard receive matches) are executed in all possible ways. Our new algorithm rewrites wildcard receives to specific receives, one for each sender that can potentially match with the receive. It then recursively explores each case of the specific receives. The list of potential senders matching a receive is determined through a runtime algorithm that exploits MPI's operation ordering semantics. Our verification tool ISP that incorporates this algorithm efficiently verifies several programs and finds bugs missed by existing informal verification tools.

  7. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1988-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  8. BDF-like methods for nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2010-04-01

    We present several time integration algorithms of second-order accuracy that are numerically simple and effective for nonlinear elastodynamic problems. These algorithms are based on a general four-step scheme that has a resemblance to the backward differentiation formulas. We also present an extension to the composite strategy of the Bathe method. Appropriate values for the algorithmic parameters are determined based on considerations of stability and dissipativity, and less dissipative members of each algorithm have been identified. We demonstrate the convergence characteristics of the proposed algorithms with a nonlinear dynamic problem having analytic solutions, and test these algorithms with several three-dimensional nonlinear elastodynamic problems involving large deformations and rotations, employing St. Venant-Kirchhoff and compressible Neo-Hookean hyperelastic material models. These tests show that stable computations are obtained with the proposed algorithms in nonlinear situations where the trapezoidal rule encounters a well-known instability.
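
    For context, a plain second-order BDF step applied to a nonlinear elastodynamic-style test problem looks like the sketch below; the paper's four-step scheme and its composite Bathe-type extension are not reproduced here, and the Duffing oscillator, step size, and use of fsolve for the implicit solve are assumptions made only for illustration.

        import numpy as np
        from scipy.optimize import fsolve

        # Nonlinear test problem: undamped Duffing oscillator as a first-order system
        def f(y):
            q, v = y
            return np.array([v, -q - q**3])

        def bdf2(y0, dt, n_steps):
            """Second-order BDF time stepping: each step solves the implicit relation
            y_{n+1} - (4/3) y_n + (1/3) y_{n-1} = (2/3) dt f(y_{n+1})."""
            ys = [np.asarray(y0, dtype=float)]
            # Bootstrap the two-step scheme with one backward-Euler step
            ys.append(fsolve(lambda y: y - ys[0] - dt * f(y), ys[0]))
            for _ in range(n_steps - 1):
                yn, ynm1 = ys[-1], ys[-2]
                residual = lambda y: y - (4.0 / 3.0) * yn + (1.0 / 3.0) * ynm1 - (2.0 / 3.0) * dt * f(y)
                ys.append(fsolve(residual, yn))
            return np.array(ys)

        trajectory = bdf2(y0=[1.0, 0.0], dt=0.05, n_steps=400)
        print(trajectory[-1])   # displacement and velocity after 20 time units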

  9. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  10. Dynamically controlled crystallization method and apparatus and crystals obtained thereby

    NASA Technical Reports Server (NTRS)

    Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)

    1999-01-01

    A method and apparatus for dynamically controlling the crystallization of proteins including a crystallization chamber or chambers for holding a protein in a salt solution, one or more salt solution chambers, two communication passages respectively coupling the crystallization chamber with each of the salt solution chambers, and transfer mechanisms configured to respectively transfer salt solution between each of the salt solution chambers and the crystallization chamber. The transfer mechanisms are interlocked to maintain the volume of salt solution in the crystallization chamber substantially constant. Salt solution of different concentrations is transferred into and out of the crystallization chamber to adjust the salt concentration in the crystallization chamber to achieve precise control of the crystallization process.

  11. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.
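
    The paired-comparison starting point for such tournament data is the Bradley-Terry model, sketched below under the usual independence assumption; the hybrid pairwise likelihood machinery that accounts for dependence between fights involving the same animal is the paper's contribution and is not reproduced here. The simulated tournament and ability values are assumptions for the example.

        import numpy as np
        from scipy.optimize import minimize

        # Simulated tournament: wins[i, j] = number of fights animal i won against animal j
        rng = np.random.default_rng(4)
        n_animals, fights_per_pair = 6, 5
        true_ability = rng.normal(0.0, 1.0, n_animals)
        wins = np.zeros((n_animals, n_animals))
        for i in range(n_animals):
            for j in range(i + 1, n_animals):
                p_i_beats_j = 1.0 / (1.0 + np.exp(-(true_ability[i] - true_ability[j])))
                w = rng.binomial(fights_per_pair, p_i_beats_j)
                wins[i, j], wins[j, i] = w, fights_per_pair - w

        def neg_loglik(free_abilities):
            """Bradley-Terry log-likelihood treating fights as independent;
            the first animal's ability is fixed at zero for identifiability."""
            a = np.concatenate([[0.0], free_abilities])
            ll = 0.0
            for i in range(n_animals):
                for j in range(n_animals):
                    if i != j:
                        ll += wins[i, j] * (a[i] - np.logaddexp(a[i], a[j]))
            return -ll

        fit = minimize(neg_loglik, x0=np.zeros(n_animals - 1))
        print("estimated abilities:", np.round(np.concatenate([[0.0], fit.x]), 2))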

  12. Power series expansion method in tensor-optimized antisymmetrized molecular dynamics beyond the Jastrow correlation method

    NASA Astrophysics Data System (ADS)

    Myo, Takayuki; Toki, Hiroshi; Ikeda, Kiyomi; Horiuchi, Hisashi; Suhara, Tadahiro

    2017-09-01

    We developed a new variational method for tensor-optimized antisymmetrized molecular dynamics (TOAMD) for nuclei. In TOAMD, the correlation functions for the tensor force and the short-range repulsion are introduced and used in the power series form of the wave function, which is different from the Jastrow method. Here, nucleon pairs are correlated in multisteps with different forms, while they are correlated only once including all pairs in the Jastrow correlation method. Each correlation function in every term is independently optimized in the variation of total energy in TOAMD. For s-shell nuclei using the nucleon-nucleon interaction, the energies in TOAMD are better than those in the variational Monte Carlo method with the Jastrow correlation function. This means that the power series expansion using the correlation functions in TOAMD describes the nuclei better than the Jastrow correlation method.

  13. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  14. The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase

    SciTech Connect

    Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako

    2007-03-22

    We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes taking account of their thermal motion. In this method, the direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site.

  15. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data is fundamental to borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data is difficult for any single research institution or government to obtain. VGI has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost. Therefore VGI is one reasonable source of borderland spatial data. OpenStreetMap (OSM) has been known as the most successful VGI resource. But the OSM data model is far different from that of traditional authoritative geographic information, so the OSM data needs to be converted to the researcher's customized data model. With the real world changing fast, the converted data needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.

  16. Introduction to finite-difference methods for numerical fluid dynamics

    SciTech Connect

    Scannapieco, E.; Harlow, F.H.

    1995-09-01

    This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the k-epsilon model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
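
    The first topic in the list, one-dimensional heat flow, is also the simplest place to see the flux, stability, and explicit-method ideas in code. The sketch below uses a forward-time, centered-space (FTCS) scheme with an assumed diffusivity, grid, and boundary temperatures chosen only for illustration; it is not taken from the exercise book itself.

        import numpy as np

        # Explicit (FTCS) finite-difference solution of the 1-D heat equation
        #   dT/dt = alpha * d^2 T / dx^2   on a rod with fixed-temperature ends.
        alpha = 1.0e-4                 # thermal diffusivity (m^2/s)
        L, nx = 1.0, 51                # rod length (m) and number of grid points
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / alpha       # below the stability limit dt <= 0.5*dx^2/alpha

        T = np.zeros(nx)
        T[0], T[-1] = 100.0, 0.0       # boundary conditions: hot left end, cold right end

        for _ in range(2000):
            # centered second difference applied to the interior points
            T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

        print(T[::10])                 # temperature profile sampled every 10th node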

  17. Data assimilation in problems of mantle dynamics: Methods and applications

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Schubert, G.; Tsepelev, I.; Korotkii, A.

    2009-05-01

    We present and compare several methods (backward advection, adjoint, and quasi-reversibility) for assimilation of geophysical and geodetic data in geodynamical models. These methods allow for incorporating observations and unknown initial conditions for mantle temperature and flow into a three-dimensional dynamic model in order to determine the initial conditions in the geological past. Once the conditions are determined, the evolution of mantle structures can be restored. Using the quasi-reversibility method we reconstruct the evolution of the descending lithospheric slab beneath the south-eastern Carpathians. We show that the geometry of the mantle structures changes with time, diminishing the degree of surface curvature of the structures, because the heat diffusion tends to smooth the complex thermal surfaces of mantle bodies with time. Present seismic tomography images of mantle structures do not allow definition of the sharp shapes of these structures in the past. Assimilation of mantle temperature and flow instead provides a quantitative tool to restore the thermal shapes of prominent structures in the past from their diffusive shapes at present.

  18. Fast method for dynamic thresholding in volume holographic memories

    NASA Astrophysics Data System (ADS)

    Porter, Michael S.; Mitkas, Pericles A.

    1998-11-01

    It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between databit values. These thresholding points will vary as a result of system noise -- due to contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold to meet the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of histogram or by use of statistical information. These approaches fail in that they unnecessarily switch to a 1-D paradigm. In this serial domain, global thresholding is pointless since sequence detection could be applied. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values are designated as threshold determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data as well as data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize upon untrained data for binary and multi-level gray scale datapage images. Methodologies are discussed for improving the performance by a proper training set selection.

  19. The effect of different graphical and numerical likelihood formats on perception of likelihood and choice.

    PubMed

    Oudhoff, Jurriaan P; Timmermans, Daniëlle R M

    2015-05-01

    Quantitative risk information plays an important role in decision making about health. This study focuses on commonly used numerical and graphical formats and examines their effect on perception of different likelihoods and choice preferences. An experimental study was conducted with 192 participants, who evaluated 2 sets of 4 lotteries. Numerical formats to describe likelihood varied systematically between participants (X%, X-in-100, or 1-in-X). The effect of graphic formats (bar charts, icon charts) was assessed as a within-subjects factor. Dependent measures included perceived likelihood, choice preferences about participating in the lottery, and processing times. Numerical likelihoods presented as 1-in-X were processed fastest and were perceived as conveying larger likelihoods than the X-in-100 and percentages formats (mean response times in seconds: 5.65 v. 7.31 and 6.50; mean rating on a 1-9 scale: 4.38 v. 3.30 and 3.31, respectively). The 1-in-X format also evoked a stronger willingness to participate in a lottery than the 2 other numerical formats. The effect of adding graphs on perceived likelihood was moderated by numerical aptitude. Graphs reduced ratings of perceived likelihood of participants with lower numeracy, while there was no overall effect for participants with higher numeracy. Perception of likelihood differs significantly depending on the numerical format used. The 1-in-X format yields higher perceived likelihoods and it appears to be the easiest format to interpret. Graphs primarily affect perception of likelihood of people with lower numerical aptitude. These effects should be taken into account when discussing medical risks with patients.

  20. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
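
    The approximation is easiest to see for a canonical Poisson GLM: the data-dependent linear term of the log-likelihood is kept, while the sum of exp(x·θ) over observations is replaced by its expectation under the covariate distribution, which is available in closed form for Gaussian covariates. The sketch below, with simulated data and a known covariate covariance, is an illustration of that idea only, not the estimators or the retinal dataset analyzed in the paper.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        n, d = 5000, 10
        C = np.eye(d)                              # covariate covariance (known to the experimenter)
        X = rng.multivariate_normal(np.zeros(d), C, size=n)
        theta_true = rng.normal(0.0, 0.3, d)
        y = rng.poisson(np.exp(X @ theta_true))
        Xty = X.T @ y                              # sufficient statistic for the linear term

        def exact_negll(theta):
            """Exact Poisson GLM negative log-likelihood (up to constants in y)."""
            eta = X @ theta
            return -(Xty @ theta - np.sum(np.exp(eta)))

        def expected_negll(theta):
            """Expected log-likelihood: the sum of exp(x.theta) is replaced by its
            closed-form Gaussian expectation n * exp(theta' C theta / 2)."""
            return -(Xty @ theta - n * np.exp(0.5 * theta @ C @ theta))

        theta_ml = minimize(exact_negll, np.zeros(d)).x
        theta_el = minimize(expected_negll, np.zeros(d)).x
        print("ML error:", np.linalg.norm(theta_ml - theta_true))
        print("EL error:", np.linalg.norm(theta_el - theta_true))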

  1. Efficient Bit-to-Symbol Likelihood Mappings

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
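
    The symbol-to-bit direction of such a mapping is, in its textbook form, a log-sum of per-symbol likelihoods for each bit position; the sketch below shows that generic computation for a Gray-mapped 4-PAM constellation and is not the reduced-complexity algorithm of the innovation. The constellation, bit labeling, and noise model are assumptions for the example.

        import numpy as np

        # Symbol-to-bit likelihood mapping for a Gray-mapped 4-PAM constellation:
        # per-symbol likelihoods are combined into per-bit log-likelihood ratios (LLRs).
        symbols = np.array([-3.0, -1.0, 1.0, 3.0])           # constellation points
        bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])    # Gray bit labels per symbol

        def bit_llrs(r, noise_var):
            """LLR(b_k) = log sum_{s: b_k=0} p(r|s) - log sum_{s: b_k=1} p(r|s),
            assuming additive white Gaussian noise."""
            log_p = -((r - symbols) ** 2) / (2.0 * noise_var)  # per-symbol log-likelihoods
            llrs = []
            for k in range(bits.shape[1]):
                llr_zero = np.logaddexp.reduce(log_p[bits[:, k] == 0])
                llr_one = np.logaddexp.reduce(log_p[bits[:, k] == 1])
                llrs.append(llr_zero - llr_one)
            return np.array(llrs)

        print(bit_llrs(r=0.8, noise_var=0.5))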

  2. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

  3. Substructure method in high-speed monorail dynamic problems

    NASA Astrophysics Data System (ADS)

    Ivanchenko, I. I.

    2008-12-01

    The study of actions of high-speed moving loads on bridges and elevated tracks remains a topical problem for transport. In the present study, we propose a new method for moving load analysis of elevated tracks (monorail structures or bridges), which permits studying the interaction between two strained objects consisting of rod systems and rigid bodies with viscoelastic links; one of these objects is the moving load (monorail rolling stock), and the other is the carrying structure (monorail elevated track or bridge). The methods for moving load analysis of structures were developed in numerous papers [1-15]. At the first stage, when solving the problem about a beam under the action of the simplest moving load such as a moving weight, two fundamental methods can be used; the same methods are realized for other structures and loads. The first method is based on the use of a generalized coordinate in the expansion of the deflection in the natural shapes of the beam, and the problem is reduced to solving a system of ordinary differential equations with variable coefficients [1-3]. In the second method, after the "beam-weight" system is decomposed, just as in the problem with the weight impact on the beam [4], solving the problem is reduced to solving an integral equation for the dynamic weight reaction [6, 7]. In [1-3], an increase in the number of retained forms leads to an increase in the order of the system of equations; in [6, 7], difficulties arise when solving the integral equations related to the conditional stability of the step procedures. The method proposed in [9, 14] for beams and rod systems combines the above approaches and eliminates their drawbacks, because it permits retaining any necessary number of shapes in the deflection expansion and has a resolving system of equations with an unconditionally stable integration scheme and with a minimum number of unknowns, just as in the method of integral equations [6, 7]. This method is further developed for

  4. The reversibility error method (REM): a new, dynamical fast indicator for planetary dynamics

    NASA Astrophysics Data System (ADS)

    Panichi, Federico; Goździewski, Krzysztof; Turchetti, Giorgio

    2017-06-01

    We describe the reversibility error method (REM) and its applications to planetary dynamics. REM is based on the time-reversibility analysis of the phase-space trajectories of conservative Hamiltonian systems. The round-off errors break the time reversibility and the displacement from the initial condition, occurring when we integrate it forward and backward for the same time interval, is related to the dynamical character of the trajectory. If the motion is chaotic, in the sense of non-zero maximal Lyapunov characteristic exponent (mLCE), then REM increases exponentially with time, as exp(λt), while when the motion is regular (quasi-periodic), then REM increases as a power law in time, as t^α, where α and λ are real coefficients. We compare the REM with a variant of mLCE, the mean exponential growth factor of nearby orbits. The test set includes the restricted three-body problem and five resonant planetary systems: HD 37124, Kepler-60, Kepler-36, Kepler-29 and Kepler-26. We found very good agreement between the outcomes of these algorithms. Moreover, the numerical implementation of REM is astonishingly simple and is based on a solid theoretical background. The REM requires only a symplectic and time-reversible (symmetric) integrator of the equations of motion. This method is also CPU efficient. It may be particularly useful for the dynamical analysis of multiple planetary systems in the Kepler sample, characterized by low-eccentricity orbits and relatively weak mutual interactions. As an interesting side result, we found a possible stable chaos occurrence in the Kepler-29 planetary system.
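
    The forward-backward recipe is simple enough to sketch on the Chirikov standard map, which stands in here for the symplectic, time-reversible integrator used for planetary systems: iterate forward n times, apply the exact inverse map n times, and measure the distance from the initial condition, which is nonzero only because of round-off. The map, parameter values, and initial conditions below are assumptions for the illustration, not the planetary-dynamics implementation.

        import numpy as np

        # Chirikov standard map and its exact inverse
        def forward(x, p, K):
            p = p + K * np.sin(x)
            x = x + p
            return x, p

        def backward(x, p, K):
            x = x - p
            p = p - K * np.sin(x)
            return x, p

        def rem(x0, p0, K, n):
            """Reversibility error: displacement after n forward and n backward steps."""
            x, p = x0, p0
            for _ in range(n):
                x, p = forward(x, p, K)
            for _ in range(n):
                x, p = backward(x, p, K)
            return np.hypot(x - x0, p - p0)

        for n in (100, 200, 400, 800):
            # an orbit near the elliptic fixed point (regular) vs one in the chaotic sea
            print(n, "regular:", rem(3.0, 0.0, 0.9, n), "chaotic:", rem(0.5, 0.0, 2.0, n))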

  5. Methods for Optimal Output Prediction in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Kast, Steven Michael

    In a Computational Fluid Dynamics (CFD) simulation, not all data is of equal importance. Instead, the goal of the user is often to compute certain critical outputs - such as lift and drag - accurately. While in recent years CFD simulations have become routine, ensuring accuracy in these outputs is still surprisingly difficult. Unacceptable levels of output error arise even in industry-standard simulations, such as the steady flow around commercial aircraft. This problem is only exacerbated when simulating more complex, unsteady flows. In this thesis, we present a mesh adaptation strategy for unsteady problems that can automatically reduce errors in outputs of interest. This strategy applies to problems in which the computational domain deforms in time - such as flapping-flight simulations - and relies on an unsteady adjoint to identify regions of the mesh contributing most to the output error. This error is then driven down via refinement of the critical regions in both space and time. Here, we demonstrate this strategy on a series of flapping-wing problems in two and three dimensions, using high-order discontinuous Galerkin (DG) methods for both spatial and temporal discretizations. Compared to other methods, results indicate that this strategy can deliver a desired level of output accuracy with significant reductions in computational cost. After concluding our work on mesh adaptation, we take a step back and investigate another idea for obtaining output accuracy: adapting the numerical method itself. In particular, we show how the test space of discontinuous finite element methods can be "optimized" to achieve accuracy in certain outputs or regions. In this work, we compute test functions that ensure accuracy specifically along domain boundaries. These regions - which are vital to both scalar outputs (such as lift and drag) and distributions (such as pressure and skin friction) - are often the most important from an engineering standpoint.

  6. Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.

    PubMed

    Ramírez, C L; Martí, M A; Roitberg, A E

    2016-01-01

    One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments, such as an enzyme or aqueous solution, and to determine the corresponding free energy profile, which provides a direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows one to obtain the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this short review, using the extensively studied chorismate to prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, to the accuracy of the free energy obtained with JR. Second, and in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows one to obtain more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a lower computational cost.
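
    A minimal sketch of the exponential work averaging step that JR prescribes is given below; the synthetic work profiles, temperature, and array shapes are illustrative, and the log-sum-exp trick is used only for numerical stability.

```python
# Jarzynski's relationship applied to a set of nonequilibrium work
# profiles W_i(lambda): DeltaF(lambda) = -kT * ln < exp(-W/kT) >, with
# the average taken over independent steered trajectories.
import numpy as np
from scipy.special import logsumexp

def jarzynski_free_energy(work_profiles, kT):
    """work_profiles: (n_trajectories, n_points) accumulated work along
    the steering coordinate; returns the free energy profile."""
    W = np.asarray(work_profiles)
    log_avg = logsumexp(-W / kT, axis=0) - np.log(W.shape[0])
    return -kT * log_avg

# Synthetic Gaussian work distributions whose mean drifts upward: the
# exponential average is dominated by rare low-work trajectories, which
# is why convergence requires many pullings at fast steering speeds.
rng = np.random.default_rng(0)
W = rng.normal(loc=np.linspace(0.0, 20.0, 50), scale=2.0, size=(200, 50))
print(jarzynski_free_energy(W, kT=0.596)[-1])   # kT ~ 0.596 kcal/mol at 300 K
```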

  7. An analytic method for identifying dynamically formed runaway stars

    NASA Astrophysics Data System (ADS)

    Ryu, Taeho; Leigh, Nathan W. C.; Perna, Rosalba

    2017-09-01

    In this paper, we study the three-body products (two single stars and a binary) of binary-binary (2+2) scattering interactions. This is done using a combination of analytic methods and numerical simulations of 2+2 scattering interactions, both in isolation and in a homogeneous background potential. We analytically derive a simple formula relating the angle between the velocity vectors of the two ejected single stars and the orbital separation of the remaining binary. We compare our analytic formulation to numerical scattering simulations and illustrate that the agreement is excellent, both in isolation and in a homogeneous background potential. Our results are ideally suited for application to the Gaia database, which is expected to identify many hundreds of runaway stars. The analytic relation presented here has the potential to identify runaway stars formed dynamically with high confidence. Finally, by applying our method to the runaways AE Aur and μ Col, we illustrate that it can be used to constrain the history of the background potential, which was denser than the presently observed density in the case of the Trapezium cluster.

  8. A new tree code method for simulation of planetesimal dynamics

    NASA Astrophysics Data System (ADS)

    Richardson, D. C.

    1993-03-01

    A new tree code method for simulation of planetesimal dynamics is presented. A self-similarity argument is used to restrict the problem to a small patch of a ring of planetesimals at 1 AU from the sun. The code incorporates a sliding box model with periodic boundary conditions and surrounding ghost particles. The tree is self-repairing and exploits the flattened nature of Keplerian disks to maximize efficiency. The code uses a fourth-order force polynomial integration algorithm with individual particle time-steps. Collisions and mergers, which play an important role in planetesimal evolution, are treated in a comprehensive manner. In typical runs with a few hundred central particles, the tree code is approximately 2-3 times faster than a recent direct summation method and requires about 1 CPU day on a Sparc IPX workstation to simulate 100 yr of evolution. The average relative force error incurred in such runs is less than 0.2 per cent in magnitude. In general, the CPU time as a function of particle number varies in a way consistent with an O(N log N) algorithm. In order to take advantage of facilities available, the code was written in C in a Unix workstation environment. The unique aspects of the code are discussed in detail and the results of a number of performance tests - including a comparison with previous work - are presented.

  9. Dynamically controlled crystallization method and apparatus and crystals obtained thereby

    NASA Technical Reports Server (NTRS)

    Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)

    2003-01-01

    A method and apparatus for dynamically controlling the crystallization of molecules including a crystallization chamber (14) or chambers for holding molecules in a precipitant solution, one or more precipitant solution reservoirs (16, 18), communication passages (17, 19) respectively coupling the crystallization chamber(s) with each of the precipitant solution reservoirs, and transfer mechanisms (20, 21, 22, 24, 26, 28) configured to respectively transfer precipitant solution between each of the precipitant solution reservoirs and the crystallization chamber(s). The transfer mechanisms are interlocked to maintain a constant volume of precipitant solution in the crystallization chamber(s). Precipitant solutions of different concentrations are transferred into and out of the crystallization chamber(s) to adjust the concentration of precipitant in the crystallization chamber(s) to achieve precise control of the crystallization process. The method and apparatus can be used effectively to grow crystals under reduced gravity conditions such as microgravity conditions of space, and under conditions of reduced or enhanced effective gravity as induced by a powerful magnetic field.

  10. Recent developments in maximum likelihood estimation of MTMM models for categorical data.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability for MTMM models with categorical data is discussed.

  11. A computationally efficient spectral method for modeling core dynamics

    NASA Astrophysics Data System (ADS)

    Marti, P.; Calkins, M. A.; Julien, K.

    2016-08-01

    An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N^2) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E=10^-5, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.
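
    The sparsity argument can be seen already in the simplest operator built from the recurrence, multiplication by the radial coordinate, which is tridiagonal in Chebyshev coefficient space; the sketch below verifies this against dense polynomial algebra and is purely illustrative, not the paper's implementation.

```python
# The three-term recurrence x*T_0 = T_1, x*T_n = (T_{n-1} + T_{n+1})/2
# makes "multiply by x" a tridiagonal matrix on Chebyshev coefficients,
# whereas the same operation in a collocation (nodal) basis is dense.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy import sparse

def mult_by_x(N):
    """Tridiagonal operator for x*f on length-N coefficient vectors
    (the T_N term generated by the last mode is truncated)."""
    M = sparse.lil_matrix((N, N))
    M[1, 0] = 1.0
    for n in range(1, N):
        M[n - 1, n] = 0.5
        if n + 1 < N:
            M[n + 1, n] = 0.5
    return M.tocsr()

N = 8
a = np.arange(1.0, N + 1)                 # some Chebyshev coefficients
exact = C.chebmul([0.0, 1.0], a)[:N]      # x*f via dense polynomial algebra
print(np.allclose(mult_by_x(N) @ a, exact))   # True, up to the truncated mode
```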

  12. Use of historical information in a maximum-likelihood framework

    USGS Publications Warehouse

    Cohn, T.A.; Stedinger, J.R.

    1987-01-01

    This paper discusses flood-quantile estimators which can employ historical and paleoflood information, both when the magnitudes of historical flood peaks are known, and when only threshold-exceedance information is available. Maximum likelihood, quasi-maximum likelihood and curve fitting methods for simultaneous estimation of 1, 2 and 3 unknown parameters are examined. The information contained in a 100 yr record of historical observations, during which the flood perception threshold was near the 10 yr flood level (i.e., on average, one flood in ten is above the threshold and hence is recorded), is equivalent to roughly 43, 64 and 78 years of systematic record in terms of the improvement of the precision of 100 yr flood estimators when estimating 1, 2 and 3 parameters, respectively. With the perception threshold at the 100 yr flood level, the historical data was worth 13, 20 and 46 years of systematic data when estimating 1, 2 and 3 parameters, respectively. © 1987.
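
    A hedged sketch of the kind of likelihood involved is given below: a systematic record enters through exact densities, while h historical years in which only the k exceedances of a perception threshold were recorded enter through a censored term. The Gumbel model, record lengths, and starting values are illustrative choices, not those of the paper.

```python
# Maximum-likelihood flood-frequency fit mixing a systematic record of
# annual peaks with threshold-exceedance historical information:
#   L = prod f(x_sys) * prod f(x_hist > T) * F(T)^(h - k)
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, sys_peaks, hist_peaks, threshold, h_years):
    loc, log_scale = params
    dist = stats.gumbel_r(loc=loc, scale=np.exp(log_scale))
    ll = dist.logpdf(sys_peaks).sum()                 # exact systematic data
    ll += dist.logpdf(hist_peaks).sum()               # recorded historical exceedances
    ll += (h_years - len(hist_peaks)) * dist.logcdf(threshold)  # censored years
    return -ll

rng = np.random.default_rng(1)
true = stats.gumbel_r(loc=100.0, scale=30.0)
sys_peaks = true.rvs(40, random_state=rng)            # 40 yr systematic record
hist_all = true.rvs(100, random_state=rng)            # 100 yr historical period
T = true.ppf(0.9)                                     # ~10 yr flood threshold
hist_peaks = hist_all[hist_all > T]

res = optimize.minimize(neg_log_lik,
                        x0=[sys_peaks.mean(), np.log(sys_peaks.std())],
                        args=(sys_peaks, hist_peaks, T, 100))
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(loc_hat, scale_hat,
      "100 yr flood:", stats.gumbel_r(loc_hat, scale_hat).ppf(0.99))
```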

  13. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  14. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  15. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…

  16. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692

  17. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  18. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey.

    PubMed

    Peyre, Hugo; Leplège, Alain; Coste, Joël

    2011-03-01

    Missing items are common in quality of life (QoL) questionnaires and present a challenge for research in this field. It remains unclear which of the various methods proposed to deal with missing data performs best in this context. We compared personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques using various realistic simulation scenarios of item missingness in QoL questionnaires constructed within the framework of classical test theory. Samples of 300 and 1,000 subjects were randomly drawn from the 2003 INSEE Decennial Health Survey (of 23,018 subjects representative of the French population and having completed the SF-36) and various patterns of missing data were generated according to three different item non-response rates (3, 6, and 9%) and three types of missing data (Little and Rubin's "missing completely at random," "missing at random," and "missing not at random"). The missing data methods were evaluated in terms of accuracy and precision for the analysis of one descriptive and one association parameter for three different scales of the SF-36. For all item non-response rates and types of missing data, multiple imputation and full information maximum likelihood appeared superior to the personal mean score and especially to hot deck in terms of accuracy and precision; however, the use of personal mean score was associated with insignificant bias (relative bias <2%) in all studied situations. Whereas multiple imputation and full information maximum likelihood are confirmed as reference methods, the personal mean score appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
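
    Of the techniques compared above, the personal mean score is the simplest to implement; the hedged sketch below fills a respondent's missing items with the mean of that respondent's completed items on the same scale, using the common (but not universal) half-answered rule, with hypothetical item names.

```python
# Personal mean score (PMS): replace a respondent's missing items by the
# mean of that respondent's completed items on the same scale, here only
# when at least half of the items were answered.
import numpy as np
import pandas as pd

def personal_mean_score(items: pd.DataFrame, min_answered_frac=0.5) -> pd.DataFrame:
    answered = items.notna().sum(axis=1)
    ok = answered >= np.ceil(min_answered_frac * items.shape[1])
    filled = items.apply(lambda row: row.fillna(row.mean()), axis=1)
    filled[~ok] = np.nan          # too many missing items: leave the scale missing
    return filled

scale = pd.DataFrame({"q1": [3, np.nan, 2], "q2": [4, np.nan, np.nan],
                      "q3": [np.nan, 5, np.nan], "q4": [2, np.nan, 1]})
print(personal_mean_score(scale))
```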

  19. Safe semi-supervised learning based on weighted likelihood.

    PubMed

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′→∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′ < n. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pair has not occurred or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
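
    The zero-inflated Poisson building block is easy to fit by EM on its own, as in the hedged sketch below; the full signal-detection procedure additionally forms a likelihood ratio statistic per drug-adverse event cell and controls the error rates described above, which is not reproduced here.

```python
# EM estimation for a zero-inflated Poisson model: with probability pi a
# cell is a structural ("true") zero, otherwise counts are Poisson(lam).
import numpy as np

def zip_em(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    pi, lam = 0.5, max(x.mean(), 1e-6)                 # crude starting values
    for _ in range(n_iter):
        # E-step: probability that an observed zero is a structural zero
        z = np.where(x == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step
        pi = z.mean()
        lam = x.sum() / (1 - z).sum()
    return pi, lam

rng = np.random.default_rng(2)
structural = rng.random(10_000) < 0.3
x = np.where(structural, 0, rng.poisson(2.5, size=10_000))
print(zip_em(x))    # should be close to (0.3, 2.5)
```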

  1. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M. M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
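
    The core forward-simulation idea is easy to illustrate with a bare-bones rejection sampler, as in the sketch below; cosmoabc itself implements the more efficient Population Monte Carlo variant with adaptive importance sampling, and the simulator, prior, distance function, and tolerance here are placeholders rather than part of its API.

```python
# Rejection ABC: draw theta from the prior, forward-simulate mock data,
# and keep theta whenever the distance between summary statistics of the
# simulated and observed catalogs falls below a tolerance eps.
import numpy as np

rng = np.random.default_rng(3)

def simulator(theta, n=200):
    return rng.poisson(theta, size=n)          # placeholder forward model

def distance(sim, obs):
    return abs(sim.mean() - obs.mean())        # placeholder summary distance

def abc_rejection(obs, prior_sample, eps, n_accept):
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()
        if distance(simulator(theta), obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

obs = rng.poisson(4.0, size=200)               # "observed" catalog
posterior = abc_rejection(obs, prior_sample=lambda: rng.uniform(0.0, 10.0),
                          eps=0.1, n_accept=500)
print(posterior.mean(), posterior.std())
```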

  2. PARTICLE-GAS DYNAMICS WITH ATHENA: METHOD AND CONVERGENCE

    SciTech Connect

    Bai Xuening; Stone, James M. E-mail: jstone@astro.princeton.ed

    2010-10-15

    The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag in order to study the dynamics of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τ_s = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. Finally, we find that the measurement of particle transport properties resulting from the streaming instability may be subject to error of about ±20%.

  3. On the feasibility of a transient dynamic design analysis method

    NASA Astrophysics Data System (ADS)

    Ohara, George J.; Cunniff, Patrick F.

    1992-04-01

    This Annual Report summarizes the progress that was made during the first year of the two-year grant from the Office of Naval Research. The dynamic behavior of structures subjected to mechanical shock loading provides a continuing problem for design engineers concerned with shipboard foundations supporting critical equipment. There are two particular problems associated with shock response that are currently under investigation. The first topic explores the possibilities of developing a transient design analysis method that does not degrade the current level of the Navy's shock-proofness requirements for heavy shipboard equipment. The second topic examines the prospects of developing scaling rules for the shock response of simple internal equipment of submarines subjected to various attack situations. This effort has been divided into two tasks: chemical explosive scaling for a given hull; and scaling of equipment response across different hull sizes. The computer is used as a surrogate shock machine for these studies. Hence, the results of the research can provide trends, ideas, suggestions, and scaling rules to the Navy. In using these results, the shock-hardening program should use measured data rather than calculated data.

  4. A Space/Time Dynamically Adaptive Method for Multiscale Problems

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle; Zikoski, Zachary; Paolucci, Samuel; Valorani, Mauro

    2011-11-01

    Systems of partial differential equations (PDEs) describing problems that are multiscale in space and time are computationally very expensive to solve. In order to overcome the challenges related to both thin spatial layers and temporal stiffness we propose the use of a wavelet adaptive multilevel representation (WAMR) in space and an adaptive model reduction method (G-Scheme) in time. The multilevel structure of the algorithm provides a simple way to adapt computational refinements to local demands of the solution. High resolution computations are performed only in spatial regions where sharp transitions occur, while the G-Scheme is an explicit solver developed for stiff problems which is built upon a local decomposition of the dynamics into three subspaces involving slow, active and fast time scales. Only the modes in the active subspace are integrated numerically; the others are approximated asymptotically. Subsequently, the original problem not only becomes substantially smaller, but more importantly non-stiff. Combining the WAMR technique with the G-Scheme yields a time-accurate solution of a prescribed accuracy with a much smaller number of space-time degrees of freedom. While the computational scheme can be used to solve a wide class of stiff PDE problems, we will illustrate its use in the solution of the Navier-Stokes equations in reactive flows.

  5. Method for increasing the dynamic range of mass spectrometers

    DOEpatents

    Belov, Mikhail; Smith, Richard D.; Udseth, Harold R.

    2004-09-07

    A method for enhancing the dynamic range of a mass spectrometer by first passing a sample of ions through the mass spectrometer having a quadrupole ion filter, whereupon the intensities of the mass spectrum of the sample are measured. From the mass spectrum, ions within this sample are then identified for subsequent ejection. As further sampling introduces more ions into the mass spectrometer, the appropriate rf voltages are applied to a quadrupole ion filter, thereby selectively ejecting the undesired ions previously identified. In this manner, the desired ions may be collected for longer periods of time in an ion trap, thus allowing better collection and subsequent analysis of the desired ions. The ion trap used for accumulation may be the same ion trap used for mass analysis, in which case the mass analysis is performed directly, or it may be an intermediate trap. In the case where collection occurs in an intermediate trap, the desired ions are accumulated in the intermediate trap, and then transferred to a separate mass analyzer. The present invention finds particular utility where the mass analysis is performed in an ion trap mass spectrometer or a Fourier transform ion cyclotron resonance mass spectrometer.

  6. Particle-gas Dynamics with Athena: Method and Convergence

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Ning; Stone, James M.

    2010-10-01

    The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag in order to study the dynamics of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τ_s = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. Finally, we find that the measurement of particle transport properties resulting from the streaming instability may be subject to error of about ±20%.

  7. A Subspace Method for Dynamical Estimation of Evoked Potentials

    PubMed Central

    Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.

    2007-01-01

    It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about phase-locked properties of the EPs is assessed by means of an estimated signal subspace and eigenvalue decomposition. Then, in situations where dynamic fluctuations from stimulus to stimulus can be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some component of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements. PMID:18288257
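
    A generic scalar version of the Kalman filter and Rauch-Tung-Striebel smoother used for such trial-to-trial tracking is sketched below; the random-walk state, noise variances, and synthetic amplitudes are illustrative, and the paper additionally projects the data onto a signal subspace before this step.

```python
# Scalar Kalman filter + RTS smoother for a random-walk amplitude
# observed with noise, as in state-space tracking of trial-to-trial EP
# characteristics.
import numpy as np

def kalman_smoother(y, q=0.01, r=1.0, x0=0.0, p0=10.0):
    n = len(y)
    xf, pf = np.empty(n), np.empty(n)       # filtered mean / variance
    xp, pp = np.empty(n), np.empty(n)       # one-step predictions
    x, p = x0, p0
    for t in range(n):
        xp[t], pp[t] = x, p + q             # predict (random-walk state)
        k = pp[t] / (pp[t] + r)             # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])      # update with trial t
        p = (1 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()                          # RTS backward pass
    for t in range(n - 2, -1, -1):
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xf, xs

rng = np.random.default_rng(4)
true = 5.0 + np.cumsum(rng.normal(0, 0.1, size=300))    # slowly drifting amplitude
y = true + rng.normal(0, 1.0, size=300)                 # noisy single-trial estimates
xf, xs = kalman_smoother(y)
print(np.mean((xf - true) ** 2), np.mean((xs - true) ** 2))  # smoother usually lower
```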

  8. Confidence interval of the likelihood ratio associated with mixed stain DNA evidence.

    PubMed

    Beecham, Gary W; Weir, Bruce S

    2011-01-01

    Likelihood ratios are necessary to properly interpret mixed stain DNA evidence. They can flexibly consider alternate hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate and not a fixed value, because the calculations are functions of allelic frequency estimates obtained from a small portion of the population. Current methods do not account for uncertainty in the likelihood ratio estimates and are therefore an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numeric work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.

  9. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C.

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  10. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a (summary statistic computed from a) set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and obtain the posterior distribution of the model given the observations.
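
    A minimal sketch of this idea, assuming a Gaussian sampling distribution and a simple decadal-mean summary statistic (both illustrative choices), is given below.

```python
# Estimate the sampling distribution of a summary statistic from a
# model-generated time series, then evaluate the likelihood that the
# observed statistic arose from that distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
model_run = rng.normal(14.0, 0.5, size=1200)    # 100 yr of monthly "model" temperatures
obs = rng.normal(14.2, 0.5, size=120)           # 10 yr of "observations"

model_decadal_means = model_run.reshape(10, 120).mean(axis=1)
mu, sigma = model_decadal_means.mean(), model_decadal_means.std(ddof=1)
log_like = stats.norm(mu, sigma).logpdf(obs.mean())
print(log_like)    # larger values: the observations look like the model's climate
```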

  11. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  12. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
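
    The projection step at the heart of such schemes is sketched below for a single qubit, using plain (un-accelerated) projected-gradient ascent of the log-likelihood; the POVM, counts, step size, and iteration budget are illustrative, and the accelerated method of the paper adds momentum and adaptive steps on top of the same eigenvalue-simplex projection.

```python
# Projected-gradient maximum-likelihood tomography for one qubit:
# ascend L(rho) = sum_j n_j log tr(E_j rho) and project each iterate
# back onto the density matrices (PSD, unit trace).
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
povm = [(I2 + s) / 2 for s in (sx, sy, sz)] + [(I2 - s) / 2 for s in (sx, sy, sz)]

def project_to_density(rho):
    """Euclidean projection onto {rho >= 0, tr rho = 1} via eigenvalues."""
    vals, vecs = np.linalg.eigh((rho + rho.conj().T) / 2)
    u = np.sort(vals)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u - (css - 1) / np.arange(1, len(u) + 1) > 0)[0][-1]
    w = np.maximum(vals - (css[k] - 1) / (k + 1), 0)   # simplex projection
    return (vecs * w) @ vecs.conj().T

def mle_tomography(counts, povm, steps=500, lr=0.01):
    rho = np.eye(2) / 2
    for _ in range(steps):
        probs = np.array([np.trace(E @ rho).real for E in povm])
        grad = sum(n / p * E for n, p, E in zip(counts, probs, povm))
        rho = project_to_density(rho + lr * grad / sum(counts))
    return rho

true = np.array([[0.8, 0.3 + 0.1j], [0.3 - 0.1j, 0.2]])    # a valid test state
probs = np.array([np.trace(E @ true).real for E in povm])
counts = np.random.default_rng(6).multinomial(3000, probs / 3)
print(np.round(mle_tomography(counts, povm), 3))           # approaches `true`
```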

  13. An improved version of the Green's function molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Kong, Ling Ti; Denniston, Colin; Müser, Martin H.

    2011-02-01

    This work presents an improved version of the Green's function molecular dynamics method (Kong et al., 2009; Campañá and Müser, 2004 [1,2]), which enables one to study the elastic response of a three-dimensional solid to an external stress field by taking into consideration only atoms near the surface. In the previous implementation, the effective elastic coefficients measured at the Γ-point were altered to reduce finite size effects: their eigenvalues corresponding to the acoustic modes were set to zero. This scheme was found to work well for simple Bravais lattices as long as only atoms within the last layer were treated as Green's function atoms. However, it failed to function as expected in all other cases. It turns out that a violation of the acoustic sum rule for the effective elastic coefficients at Γ (Kong, 2010 [3]) was responsible for this behavior. In the new version, the acoustic sum rule is enforced by adopting an iterative procedure, which is found to be physically more meaningful than the previous one. In addition, the new algorithm allows one to treat lattices with bases and the Green's function slab is no longer confined to one layer. New version program summary: Program title: FixGFC/FixGFMD v1.12. Catalogue identifier: AECW_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 206 436. No. of bytes in distributed program, including test data, etc.: 4 314 850. Distribution format: tar.gz. Programming language: C++. Computer: All. Operating system: Linux. Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. RAM: Depends on the problem. Classification: 7.7. External routines: LAMMPS ( http://lammps.sandia.gov/), MPI ( http

  14. Dynamic permeability of porous media by the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Adler, P.; Pazdniakou, A.

    2012-04-01

    The main objective of our work is to determine the dynamic permeability of three dimensional porous media by means of the Lattice Boltzmann method (LBM). The Navier-Stokes equation can be numerically solved by LBM which is widely used to address various fluid dynamics problems. Space is discretized by a three-dimensional cubic lattice and time is discretized as well. The generally accepted notation for lattice Boltzmann models is DdQq where D stands for space dimension and Q for the number of discrete velocities. The present model is denoted by D3Q19. Moreover, the Two Relaxation Times variant of the Multi Relaxation Times model is implemented. Bounce back boundary conditions are used on the solid-fluid interfaces. The porous medium is spatially periodic. Reconstructed media were used; they are obtained by imposing a porosity and a correlation function characterized by a correlation length. Real samples can be obtained by MicroCT. In contrast with other previous contributions, the dynamic permeability K(omega) which is a complex number, is derived by imposing an oscillating body force of pulsation omega on the unit cell and by deriving the amplitude and the phase shift of the resulting time dependent seepage velocity. The influence of two limiting parameters, namely the Knudsen number Kn and the discretization for high frequencies, on K(omega) is carefully studied for the first time. Kn is proportional to nu/(cs H) where nu is the kinematic viscosity, cs the speed of sound in the fluid and H a characteristic length scale of the porous medium. Several porous media such as the classical plane Poiseuille flow and the reconstructed media are used to show that it is only for small enough values of Kn that reliable results are obtained. Otherwise, the data depend on Kn and may even be totally unphysical. However, it should be noticed that the limiting value of Kn could not be derived in general since it depends very much on the structure of the medium. Problems occur at

  15. Dynamic permeability of porous media by the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Pazdniakou, A.; Adler, P. M.

    2011-12-01

    The main objective of our work is to determine the dynamic permeability of three dimensional porous media by means of the Lattice Boltzmann method (LBM). The Navier-Stokes equation can be numerically solved by LBM which is widely used to address various fluid dynamics problems. Space is discretized by a three-dimensional cubic lattice and time is discretized as well. The generally accepted notation for lattice Boltzmann models is DdQq where D stands for space dimension and Q for the number of discrete velocities. The present model is denoted by D3Q19. Moreover, the Two Relaxation Times variant of the Multi Relaxation Times model is implemented. Bounce back boundary conditions are used on the solid-fluid interfaces. The porous medium is spatially periodic. Reconstructed media were used; they are obtained by imposing a porosity and a correlation function characterized by a correlation length. Real samples can be obtained by MicroCT. In contrast with other previous contributions, the dynamic permeability K(omega) which is a complex number, is derived by imposing an oscillating body force of pulsation omega on the unit cell and by deriving the amplitude and the phase shift of the resulting time dependent seepage velocity. The influence of two limiting parameters, namely the Knudsen number Kn and the discretization for high frequencies, on K(omega) is carefully studied for the first time. Kn is proportional to nu/(c_s H) where nu is the kinematic viscosity, c_s the speed of sound in the fluid and H a characteristic length scale of the porous medium. Several porous media such as the classical plane Poiseuille flow and the reconstructed media are used to show that it is only for small enough values of Kn that reliable results are obtained. Otherwise, the data depend on Kn and may even be totally unphysical. However, it should be noticed that the limiting value of Kn could not be derived in general since it depends very much on the structure of the medium. Problems occur

  16. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  17. Fast maximum likelihood joint estimation of frequency and frequency rate

    NASA Astrophysics Data System (ADS)

    Abatzoglou, Theagenis J.

    1986-11-01

    A fast maximum likelihood algorithm is presented that jointly estimates the frequency and frequency rate of a sinusoid corrupted by additive Gaussian white noise. It consists of a coarse search and a fine search. First, the two-dimensional frequency-frequency rate plane is subdivided into parallelograms whose size depends on the region of convergence of Newton's method used in maximizing the log-likelihood function (LLF). The size of the parallelogram is explicitly computed and is optimal for the method used. The coarse search consists of maximizing the LLF over the vertices of the parallelograms. Then, starting at the vertex where the LLF attained its maximum, a two-dimensional Newton's method to find the absolute maximum of the LLF is implemented. This last step consists of the fine search. The rate of convergence of Newton's method is cubic, and is extremely fast. Furthermore, Newton's method will converge after two iterations when the starting point used in the method lies within 75 percent of the distances defined by the parallelogram of convergence whose center coincides with the true values of frequency and frequency rate. In this case, the rms errors for frequency and frequency rate are practically equal to the Cramer-Rao bound at all signal-to-noise ratios equal to or greater than 15 dB. The frequency-frequency rate ambiguity function is shown to be even, and its periodicities are extracted.
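
    The sketch below illustrates the coarse-then-fine structure on a synthetic noisy cisoid, maximizing the matched-filter statistic that the LLF reduces to when amplitude and phase are unknown; for brevity the Newton refinement over optimally sized parallelograms is replaced here by a zoomed rectangular grid, and the signal parameters are illustrative.

```python
# Joint ML estimation of frequency f and frequency rate r of a cisoid in
# white Gaussian noise: maximize |sum_n x_n exp(-i*2*pi*(f*n + 0.5*r*n^2))|^2
# over (f, r), first on a coarse grid and then on a zoomed grid.
import numpy as np

def llf(params, x):
    f, r = params
    n = np.arange(len(x))
    return np.abs(np.sum(x * np.exp(-2j * np.pi * (f * n + 0.5 * r * n**2)))) ** 2

N, f0, r0 = 256, 0.1234, 3.2e-5
n = np.arange(N)
rng = np.random.default_rng(7)
x = np.exp(2j * np.pi * (f0 * n + 0.5 * r0 * n**2)) \
    + 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# coarse search: grid spacing finer than the mainlobe widths ~1/N and ~1/N^2
fs = np.linspace(0.0, 0.5, 512)
rs = np.linspace(0.0, 1e-4, 64)
coarse = max(((f, r) for f in fs for r in rs), key=lambda p: llf(p, x))

# fine search: zoom around the coarse maximum (stand-in for Newton's method)
fs_fine = np.linspace(coarse[0] - 2e-3, coarse[0] + 2e-3, 201)
rs_fine = np.linspace(coarse[1] - 3e-6, coarse[1] + 3e-6, 201)
fine = max(((f, r) for f in fs_fine for r in rs_fine), key=lambda p: llf(p, x))
print(coarse, fine)    # fine should be close to (f0, r0)
```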

  18. A simple objective method for determining a dynamic journal collection.

    PubMed

    Bastille, J D; Mankin, C J

    1980-10-01

    In order to determine the content of a journal collection responsive to both user needs and space and dollar constraints, quantitative measures of the use of a 647-title collection have been related to space and cost requirements to develop objective criteria for a dynamic collection for the Treadwell Library at the Massachusetts General Hospital, a large medical research center. Data were collected for one calendar year (1977) and stored with the elements for each title's profile in a computerized file. To account for the effect of the bulk of the journal runs on the number of uses, raw use data have been adjusted using linear shelf space required for each title to produce a factor called density of use. Titles have been ranked by raw use and by density of use with space and cost requirements for each. Data have also been analyzed for five special categories of use. Given automated means of collecting and storing data, use measures should be collected continuously. Using raw use frequency ranking to relate use to space and costs seems sensible since a decision point cutoff can be chosen in terms of the potential interlibrary loans generated. But it places new titles at risk while protecting titles with long, little used runs. Basing decisions on density of use frequency ranking seems to produce a larger yield of titles with fewer potential interlibrary loans and to identify titles with overlong runs which may be pruned or converted to microform. The method developed is simple and practical. Its design will be improved to apply to data collected in 1980 for a continuous study of journal use. The problem addressed is essentially one of inventory control. Viewed as such it makes good financial sense to measure use as part of the routine operation of the library to provide information for effective management decisions.
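
    A toy version of the ranking computation is sketched below; the journal names and numbers are made up, and only the density-of-use adjustment described above is shown.

```python
# Density of use = raw yearly use / linear shelf space occupied, so that
# long runs do not dominate the ranking merely by their bulk.
import pandas as pd

titles = pd.DataFrame({
    "title":      ["Journal A", "Journal B", "Journal C"],
    "uses":       [420, 310, 12],
    "shelf_feet": [18.0, 25.0, 9.0],
    "cost":       [300.0, 250.0, 180.0],
})
titles["density_of_use"] = titles["uses"] / titles["shelf_feet"]
ranked = titles.sort_values("density_of_use", ascending=False)
print(ranked)
```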

  19. Calculation method of reliability on combine harvester transmission belt by considering dynamic stress

    NASA Astrophysics Data System (ADS)

    Guan, Zhuohuai; Li, Liang; Wu, Chongyou

    2017-06-01

    Transmission belts are among the combine harvester parts most likely to fail, which seriously affects machine reliability. Dynamic stress arises from vibration during operation and must be taken into account when calculating reliability, especially in a harsh working environment such as harvesting. However, the existing reliability calculation methods for the combine harvester transmission belt did not take dynamic stress into account. In this research, a reliability calculation method was proposed based on a dynamic analysis of the transmission belt. The nonlinear dynamic equation was built using a string and beam model. From this equation, the relationship between belt speed and dynamic stress was deduced. Considering dynamic stress and treating uncertain parameters as random variables, a reliability calculation model was built. Finally, an example was presented in which the proposed dynamic reliability calculation method was simulated to verify the theoretical analysis and tested by the Monte-Carlo method.
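
    A minimal stress-strength sketch of the Monte-Carlo step is given below; the normal distributions and their parameters are illustrative, not the paper's belt data.

```python
# Monte-Carlo reliability estimate: sample the uncertain parameters and
# count how often strength exceeds the combined static + dynamic stress.
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000
strength = rng.normal(400.0, 30.0, n)         # belt strength, MPa
static_stress = rng.normal(250.0, 20.0, n)    # from the mean transmitted load
dynamic_stress = rng.normal(60.0, 25.0, n)    # vibration-induced component

print(np.mean(strength > static_stress + dynamic_stress))   # with dynamic stress
print(np.mean(strength > static_stress))                    # ignoring it (optimistic)
```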

  20. Application of Koopman Mode Decomposition Methods in Dynamic Stall

    DTIC Science & Technology

    2014-03-11

    In efforts to study dynamic stall, motivated by the interest in improving the maneuverability and performance of rotorcraft air vehicles, progress is needed for