Science.gov

Sample records for maximum likelihood approach

  1. Maximum likelihood.

    PubMed

    Yang, Shuying; De Angelis, Daniela

    2013-01-01

    The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definition of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence intervals and the likelihood ratio test are also introduced. Finally, a few examples of applications are given to illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software for a further understanding of the method and its implementation is provided.
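
    As a purely illustrative sketch of the procedure this chapter describes (the example below is not taken from the chapter), the code fits an exponential rate by numerically maximizing the log-likelihood, compares the result with the closed-form estimate, and forms a Wald-type 95% confidence interval from the Fisher information.

```python
# Minimal sketch of maximum likelihood estimation (illustrative, not from the chapter).
# Data are assumed to be i.i.d. exponential with unknown rate `lam`.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / 2.5, size=200)   # true rate = 2.5

def neg_log_lik(lam):
    # negative log-likelihood of an exponential sample
    return -(len(x) * np.log(lam) - lam * x.sum())

# numerical maximization of the likelihood
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
lam_hat = res.x

# closed-form MLE and a Wald-type 95% confidence interval from the Fisher information
lam_closed = 1.0 / x.mean()
se = lam_hat / np.sqrt(len(x))
print(lam_hat, lam_closed, (lam_hat - 1.96 * se, lam_hat + 1.96 * se))
```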

  2. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  3. Maximum-likelihood approach to strain imaging using ultrasound

    PubMed Central

    Insana, M. F.; Cook, L. T.; Bilgen, M.; Chaturvedi, P.; Zhu, Y.

    2009-01-01

    A maximum-likelihood (ML) strategy for strain estimation is presented as a framework for designing and evaluating bioelasticity imaging systems. Concepts from continuum mechanics, signal analysis, and acoustic scattering are combined to develop a mathematical model of the ultrasonic waveforms used to form strain images. The model includes three-dimensional (3-D) object motion described by affine transformations, Rayleigh scattering from random media, and 3-D system response functions. The likelihood function for these waveforms is derived to express the Fisher information matrix and variance bounds for displacement and strain estimation. The ML estimator is a generalized cross correlator for pre- and post-compression echo waveforms that is realized by waveform warping and filtering prior to cross correlation and peak detection. Experiments involving soft tissue-like media show that the ML estimator approaches the Cramér–Rao error bound for small scaling deformations: at 5 MHz and 1.2% compression, the predicted lower bound for displacement errors is 4.4 µm and the measured standard deviation is 5.7 µm. PMID:10738797
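
    The toy sketch below illustrates only the core idea of estimating displacement between pre- and post-compression echo segments by cross-correlation peak detection; it is a simplified stand-in for the generalized cross-correlator (with warping and filtering) described above, and the sampling rate and sound speed are assumed values.

```python
# Sketch of displacement estimation by cross-correlation of pre/post-compression
# echo segments (a simplified stand-in for the generalized cross-correlator in the paper).
import numpy as np

fs = 40e6                       # assumed RF sampling rate [Hz]
c = 1540.0                      # assumed speed of sound [m/s]
rng = np.random.default_rng(1)

pre = rng.standard_normal(2048)              # simulated pre-compression RF segment
true_shift = 7                               # samples
post = np.roll(pre, true_shift) + 0.05 * rng.standard_normal(2048)

# full cross-correlation and peak search
xc = np.correlate(post - post.mean(), pre - pre.mean(), mode="full")
lag = np.argmax(xc) - (len(pre) - 1)         # lag in samples

displacement_m = lag * c / (2.0 * fs)        # convert round-trip delay to displacement
print(lag, displacement_m * 1e6, "micrometres")
```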

  4. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
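
    For reference, the sketch below evaluates the standard LS estimator from the pair-count observables named above; the abstract does not give the explicit form of the ML estimator, so it is not reproduced here.

```python
# Standard Landy–Szalay estimate of the correlation function from pair counts
# (the ML estimator of the paper uses the same observables but a different combination,
# which the abstract does not spell out, so only LS is sketched here).
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_rand):
    """dd, dr, rr: raw pair counts per separation bin; n_data, n_rand: catalog sizes."""
    # normalize pair counts by the number of possible pairs
    norm_dd = dd / (n_data * (n_data - 1) / 2.0)
    norm_dr = dr / (n_data * n_rand)
    norm_rr = rr / (n_rand * (n_rand - 1) / 2.0)
    return (norm_dd - 2.0 * norm_dr + norm_rr) / norm_rr

# toy pair counts in three separation bins
xi = landy_szalay(np.array([120., 80., 60.]),
                  np.array([900., 850., 800.]),
                  np.array([1800., 1750., 1700.]),
                  n_data=100, n_rand=500)
print(xi)
```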

  5. A comparison of the generalized estimating equation approach with the maximum likelihood approach for repeated measurements.

    PubMed

    Park, T

    1993-09-30

    Liang and Zeger proposed an extension of generalized linear models to the analysis of longitudinal data. Their approach is closely related to quasi-likelihood methods and can handle both normal and non-normal outcome variables such as Poisson or binary outcomes. Their approach, however, has been applied mainly to non-normal outcome variables. This is probably due to the fact that there is a large class of multivariate linear models available for normal outcomes such as growth models and random-effects models. Furthermore, there are many iterative algorithms that yield maximum likelihood estimators (MLEs) of the model parameters. The multivariate linear model approach, based on maximum likelihood (ML) estimation, specifies the joint multivariate normal distribution of outcome variables while the approach of Liang and Zeger, based on the quasi-likelihood, specifies only the marginal distributions. In this paper, I compare the approach of Liang and Zeger and the ML approach for multivariate normal outcomes. I show that the generalized estimating equation (GEE) reduces to the score equation only when the data do not have missing observations and the correlation is unstructured. In more general cases, however, the GEE estimation yields consistent estimators that may differ from the MLEs. That is, the GEE does not always reduce to the score equation even when the outcome variables are multivariate normal. I compare the small sample properties of the GEE estimators and the MLEs by means of a Monte Carlo simulation study. PMID:8248664
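
    The hedged sketch below contrasts the two approaches on simulated longitudinal data using statsmodels: a GEE fit with an exchangeable working correlation versus a random-intercept model fit by ML. The variable names (y, time, subject) are illustrative.

```python
# Sketch contrasting the two approaches discussed above on simulated longitudinal data:
# GEE (marginal, quasi-likelihood) versus a normal random-intercept model fit by ML.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_time = 50, 4
subject = np.repeat(np.arange(n_subj), n_time)
time = np.tile(np.arange(n_time), n_subj)
u = rng.normal(0.0, 1.0, n_subj)[subject]          # random intercepts
y = 1.0 + 0.5 * time + u + rng.normal(0.0, 1.0, len(time))
df = pd.DataFrame({"y": y, "time": time, "subject": subject})

gee_fit = smf.gee("y ~ time", groups="subject", data=df,
                  cov_struct=sm.cov_struct.Exchangeable(),
                  family=sm.families.Gaussian()).fit()

ml_fit = smf.mixedlm("y ~ time", data=df, groups=df["subject"]).fit(reml=False)

print(gee_fit.params)   # GEE estimates of the marginal regression coefficients
print(ml_fit.params)    # ML estimates from the random-intercept (multivariate normal) model
```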

  6. An independent sequential maximum likelihood approach to simultaneous track-to-track association and bias removal

    NASA Astrophysics Data System (ADS)

    Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang

    2015-12-01

    In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association situations and estimate a bias for each association. Then we calculate the likelihood of each association after bias compensation. Finally, we choose the association situation with the maximum likelihood as the association result, and the corresponding bias estimate is the registration result. Considering the high false-alarm rate and interference, we adopt the independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces the correct association results and simultaneously estimates the biases precisely for a small number of targets in a multi-sensor fusion system.
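
    The toy sketch below illustrates the enumerate-estimate-select logic described above for a single scan with Gaussian errors; it does not implement the paper's independent sequential likelihood, and all quantities are simulated.

```python
# Toy sketch of joint association and bias estimation by enumeration
# (illustrative only; the paper's independent sequential likelihood is not reproduced).
import itertools
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
true_bias = np.array([2.0, -1.0])
sigma = 0.3

tracks_a = rng.uniform(0, 100, size=(4, 2))                      # sensor A tracks
tracks_b = tracks_a + true_bias + rng.normal(0, sigma, (4, 2))   # sensor B: biased + noisy
order = rng.permutation(4)
tracks_b = tracks_b[order]                                       # unknown association

best = None
for perm in itertools.permutations(range(4)):
    residuals = tracks_b[list(perm)] - tracks_a                  # hypothesized pairing
    bias_hat = residuals.mean(axis=0)                            # ML bias under this pairing
    loglik = multivariate_normal.logpdf(residuals - bias_hat,
                                        mean=np.zeros(2),
                                        cov=sigma**2 * np.eye(2)).sum()
    if best is None or loglik > best[0]:
        best = (loglik, perm, bias_hat)

print("association:", best[1], "estimated bias:", best[2])
```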

  7. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB), relating the aperture spectral radiance to the sensor Digital Number (DN) readout, is described by a quadratic polynomial. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
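
    As a generic, hedged illustration of fitting a quadratic radiance-versus-DN response when both variables are noisy, the sketch below uses orthogonal distance regression; it is a stand-in for, not a reproduction of, the weighting scheme in the paper, and all numbers are made up.

```python
# Hedged sketch: a quadratic radiance-vs-DN fit that accounts for noise in both variables
# via orthogonal distance regression (a generic stand-in for the paper's weighting scheme).
import numpy as np
from scipy import odr

rng = np.random.default_rng(4)
dn_true = np.linspace(100, 3000, 40)
c0, c1, c2 = 0.5, 0.02, 1.5e-6                      # assumed "true" coefficients
radiance_true = c0 + c1 * dn_true + c2 * dn_true**2

dn_obs = dn_true + rng.normal(0, 2.0, dn_true.size)          # digitization/count noise
rad_obs = radiance_true + rng.normal(0, 0.2, dn_true.size)   # radiance measurement noise

def quadratic(beta, x):
    return beta[0] + beta[1] * x + beta[2] * x**2

model = odr.Model(quadratic)
data = odr.RealData(dn_obs, rad_obs, sx=2.0, sy=0.2)   # supply both uncertainties
fit = odr.ODR(data, model, beta0=[0.0, 0.01, 1e-6]).run()
print(fit.beta)        # estimated coefficients
print(fit.sd_beta)     # their standard errors
```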

  8. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  9. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to some case studies for which extended hydrological information is available at annual and seasonal scale.
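
    The sketch below illustrates the compatibility constraint in its simplest form: if seasonal maxima are treated as independent, the annual non-exceedance probability is the product of the seasonal ones, with each seasonal Gumbel distribution fitted by ML. This is an assumption-laden toy, not the paper's joint estimator.

```python
# Hedged sketch of the seasonal/annual compatibility idea: fit seasonal Gumbel
# distributions by ML, then obtain the annual distribution as the product of the
# seasonal CDFs (assuming independent seasonal maxima). Not the paper's joint estimator.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(5)
seasons = {
    "wet": gumbel_r.rvs(loc=300, scale=80, size=60, random_state=rng),
    "dry": gumbel_r.rvs(loc=120, scale=30, size=60, random_state=rng),
}

# ML fit of a Gumbel distribution to each season's maxima
params = {name: gumbel_r.fit(x) for name, x in seasons.items()}

def annual_cdf(q):
    # annual non-exceedance probability = product of seasonal non-exceedance probabilities
    p = 1.0
    for loc, scale in params.values():
        p *= gumbel_r.cdf(q, loc=loc, scale=scale)
    return p

q = 500.0
print(params)
print("P(annual max <= %.0f) = %.3f" % (q, annual_cdf(q)))
```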

  10. Maximum-likelihood approaches reveal signatures of positive selection in IL genes in mammals.

    PubMed

    Neves, Fabiana; Abrantes, Joana; Steinke, John W; Esteves, Pedro J

    2014-02-01

    ILs are part of the immune system and are involved in multiple biological activities. ILs have been shown to evolve under positive selection; however, little information exists regarding which codons are specifically selected. By using different codon-based maximum-likelihood (ML) approaches, we searched for signatures of positive selection in mammalian ILs. Sequences of 46 ILs were retrieved from publicly available databases of mammalian genomes to detect signatures of positive selection in individual codons. Evolutionary analyses were conducted under two ML frameworks, the HyPhy package implemented in the Datamonkey web server and CODEML implemented in PAML. Signatures of positive selection were found in 28 ILs: IL-1A and B; IL-2, IL-4 to IL-10, IL-12A and B; IL-14 to IL-17A and C; IL-18, IL-20 to IL-22, IL-25, IL-26, IL-27B, IL-31, IL-34, and IL-36A and G. The number of codons under positive selection varied between 1 and 15. No evidence of positive selection was detected in IL-13; IL-17B and F; IL-19, IL-23, IL-24, IL-27A; or IL-29. Most mammalian ILs have sites evolving under positive selection, which may be explained by the multitude of biological processes in which ILs are involved. The results obtained raise hypotheses concerning IL functions, which should be pursued by using mutagenesis and crystallographic approaches.

  11. Molecular evolution and phylogeny of the buzzatii complex (Drosophila repleta group): a maximum-likelihood approach.

    PubMed

    Rodríguez-Trelles, F; Alarcón, L; Fontdevila, A

    2000-07-01

    The buzzatii complex of the mulleri subgroup (Drosophila repleta group) consists of three clusters of species whose evolutionary relationships are poorly known. We analyzed 2,085 coding nucleotides from the xanthine dehydrogenase (XDH) gene in the 10 available species of the complex and Drosophila mulleri and Drosophila hydei. We adopted a statistical model-fitting approach within the maximum-likelihood (ML) framework of phylogenetic inference. We first modeled the process of nucleotide substitution using a tree topology which was reasonably accurate. Then we used the most satisfactory description so attained to reconstruct the evolutionary relationships in the buzzatii complex. We found that a minimally realistic description of the substitution process of XDH should allow six substitution types and different substitution rates for codon positions. Using this description we obtained a strongly supported, fully resolved tree which is congruent with the already-known (yet few) relationships. We also analyzed published data from three mitochondrial cytochrome oxidases (CO I, II, and III). In our analyses, these relatively short DNA sequences failed to discriminate statistically among alternative phylogenies. When the data of these three gene regions are combined with the XDH sequences, the phylogenetic signal emerging from XDH becomes reinforced. All four of the gene regions evolve faster in the buzzatii and martensis clusters than in the stalkeri cluster, paralleling the amount of chromosomal evolution.

  12. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations.

    PubMed

    Lee, Tai-Sung; Radak, Brian K; Pabis, Anna; York, Darrin M

    2013-01-01

    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable to be used together with other methods to tackle the free energy estimation problem.

  13. C-arm perfusion imaging with a fast penalized maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg

    2014-03-01

    Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to get the diagnosis as fast as possible. Therefore our approach aims at perfusion imaging (PI) with a cone beam C-arm system providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slow rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We choose a penalized maximum-likelihood reconstruction method to get suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image with fewer iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.
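
    For orientation, the sketch below shows the standard multiplicative ML-EM update that the Newton-based optimization mentioned above is designed to accelerate, applied to a toy system matrix with Poisson data and no penalty or ordered subsets.

```python
# Minimal sketch of the standard multiplicative ML-EM update that the paper's
# Newton-based optimization replaces (toy system matrix, no penalty, no subsets).
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_rays = 16, 64
A = rng.uniform(0.0, 1.0, (n_rays, n_pix))        # toy forward projector
x_true = rng.uniform(0.5, 2.0, n_pix)
y = rng.poisson(A @ x_true)                        # noisy projection data

x = np.ones(n_pix)                                 # initial image
sens = A.sum(axis=0)                               # sensitivity image A^T 1
for _ in range(50):
    proj = A @ x
    ratio = y / np.maximum(proj, 1e-12)            # guard against division by zero
    x = x / sens * (A.T @ ratio)                   # multiplicative EM correction

print(np.round(x, 2))
print(np.round(x_true, 2))
```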

  14. TopREML: a topological restricted maximum likelihood approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-06-01

    We introduce topological restricted maximum likelihood (TopREML) as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. The ability of TopREML to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable via remote-sensing technology.

  15. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  16. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
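
    A hedged, much-reduced sketch of the event-by-event likelihood idea is shown below: two processes, one observable, and process fractions fitted by maximizing the summed log-likelihood of per-event mixture probabilities. The real analysis uses five processes and Monte Carlo derived PDFs of several observables.

```python
# Hedged sketch of an event-by-event maximum likelihood fit of process fractions,
# in the spirit of the analysis described above (toy one-dimensional observable,
# two processes only).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(7)
# toy "energy" observable: signal peaks at 70, background at 30
energies = np.concatenate([rng.normal(70, 3, 200), rng.normal(30, 10, 1800)])

pdf_sig = norm(70, 3).pdf(energies)      # Monte-Carlo-derived PDFs would be used in practice
pdf_bkg = norm(30, 10).pdf(energies)

def neg_log_lik(f_sig):
    # likelihood of each event is a mixture of the per-process PDFs
    return -np.log(f_sig * pdf_sig + (1.0 - f_sig) * pdf_bkg).sum()

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0 - 1e-6), method="bounded")
print("fitted signal fraction:", res.x)   # true value is 200/2000 = 0.10
```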

  17. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-01-01

    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  18. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  19. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  20. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach

    PubMed Central

    2013-01-01

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) that are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370

  21. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    SciTech Connect

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  22. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  23. Maximum-likelihood estimation of migration rates and effective population numbers in two populations using a coalescent approach.

    PubMed

    Beerli, P; Felsenstein, J

    1999-06-01

    A new method for the estimation of migration rates and effective population sizes is described. It uses a maximum-likelihood framework based on coalescence theory. The parameters are estimated by Metropolis-Hastings importance sampling. In a two-population model this method estimates four parameters: the effective population size and the immigration rate for each population relative to the mutation rate. Summarizing over loci can be done by assuming either that the mutation rate is the same for all loci or that the mutation rates are gamma distributed among loci but the same for all sites of a locus. The estimates are as good as or better than those from an optimized FST-based measure. The program is available on the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/.

  24. MAXIMUM LIKELIHOOD ESTIMATION FOR SOCIAL NETWORK DYNAMICS

    PubMed Central

    Snijders, Tom A.B.; Koskinen, Johan; Schweinberger, Michael

    2014-01-01

    A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The model for tie changes is parametric and designed for applications to social network analysis, where the network dynamics can be interpreted as being generated by choices made by the social actors represented by the nodes of the graph. An algorithm for calculating the Maximum Likelihood estimator is presented, based on data augmentation and stochastic approximation. An application to an evolving friendship network is given and a small simulation study is presented which suggests that for small data sets the Maximum Likelihood estimator is more efficient than the earlier proposed Method of Moments estimator. PMID:25419259

  25. A Maximum-Likelihood Approach to the Characterization of the Elastic Lithosphere from Gravity and Topography Data

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Olhede, S. C.

    2010-12-01

    for the strength parameters. Rarely, if ever, are lithospheric models found that satisfy both coherence and admittance to within their true error. Why don't they? Poorly characterized errors of admittance and coherence are not the only problems with this procedure. There is also the implicit annihilation of information during the construction of these statistics (coarsely sampled, sometimes squared, ratios, measures of the data as they are) themselves. Then there is the fact that we do not want to know coherence and admittance at all - we want to know properties of the lithosphere! In this presentation, we intend to abandon coherence and admittance studies for good, by proposing an entirely different method of estimating flexural rigidity, which returns it and its confidence interval, as well as a host of tests for the suitability of the assumptions made along the way, and the possible presence of correlated loads and anisotropy in the response. The crux of the method is that it employs a "Whittle" maximum-likelihood formulation that remains very grounded in the data themselves, and which is formulated in terms of variables that do have a Gaussian distribution.

  26. ²³⁰Th and ²³⁴Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina

    2016-05-01

    We applied maximum likelihood estimation to measurements of Th isotopes (²³⁴Th and ²³⁰Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study contains two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, that we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y⁻¹, fast-sinking particles: 13.96 ± 0.48 y⁻¹) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y⁻¹, fast-sinking particles: 1.81 ± 0.44 y⁻¹). Aggregation rate constants were higher, 1.88 ± 0.04 y⁻¹, in the upper layer and just 0.07 ± 0.01 y⁻¹ in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y⁻¹ in the upper

  27. Maximum likelihood molecular clock comb: analytic solutions.

    PubMed

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  28. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
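
    The sketch below is not MALCOM; it is a far simpler baseline (a first-order Markov chain over procedure codes) that illustrates the same underlying idea of training a sequence model on typical histories and flagging sequences whose likelihood under the model is low.

```python
# Not the continuity-mapping model itself: a hedged, much simpler baseline showing the
# same idea of scoring sequences by their likelihood under a model trained on typical
# sequences (here a first-order Markov chain over procedure codes), to flag anomalies.
from collections import Counter, defaultdict
import math

def train_markov(sequences, alpha=1.0):
    """Estimate transition probabilities with add-alpha smoothing."""
    counts = defaultdict(Counter)
    states = set()
    for seq in sequences:
        states.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    states = sorted(states)
    probs = {a: {b: (counts[a][b] + alpha) / (sum(counts[a].values()) + alpha * len(states))
                 for b in states} for a in states}
    return probs, states

def log_likelihood(seq, probs, states, alpha=1.0):
    floor = alpha / (alpha * len(states))        # probability for unseen transitions/states
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        ll += math.log(probs.get(a, {}).get(b, floor))
    return ll / max(len(seq) - 1, 1)             # per-transition score for comparability

typical = [["exam", "xray", "cast"], ["exam", "blood", "exam"], ["exam", "xray", "exam"]]
probs, states = train_markov(typical)
print(log_likelihood(["exam", "xray", "cast"], probs, states))   # typical: higher score
print(log_likelihood(["cast", "cast", "cast"], probs, states))   # unusual: lower score
```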

  29. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  30. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  31. ROBUST MAXIMUM LIKELIHOOD ESTIMATION IN Q-SPACE MRI.

    PubMed

    Landman, B A; Farrell, J A D; Smith, S A; Calabresi, P A; van Zijl, P C M; Prince, J L

    2008-05-14

    Q-space imaging is an emerging diffusion weighted MR imaging technique to estimate molecular diffusion probability density functions (PDFs) without the need to assume a Gaussian distribution. We present a robust M-estimator, Q-space Estimation by Maximizing Rician Likelihood (QEMRL), for diffusion PDFs based on maximum likelihood. PDFs are modeled by constrained Gaussian mixtures. In QEMRL, robust likelihood measures mitigate the impacts of imaging artifacts. In simulation and in vivo human spinal cord, the method improves reliability of estimated PDFs and increases tissue contrast. QEMRL enables more detailed exploration of the PDF properties than prior approaches and may allow acquisitions at higher spatial resolution.
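
    As a hedged illustration of maximum likelihood estimation under the Rician noise model that QEMRL builds on, the sketch below estimates a single signal amplitude and noise level from simulated magnitude data; it is not the constrained Gaussian-mixture PDF fit described above.

```python
# Hedged sketch of maximum likelihood estimation under Rician noise
# (single signal amplitude, not the full PDF mixture fit of the paper).
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

rng = np.random.default_rng(8)
nu_true, sigma_true = 100.0, 20.0
n = 500
# magnitude MR data: |nu + complex Gaussian noise| is Rician distributed
noise = rng.normal(0, sigma_true, (n, 2))
x = np.hypot(nu_true + noise[:, 0], noise[:, 1])

def neg_log_lik(theta):
    nu, sigma = theta
    if nu <= 0 or sigma <= 0:
        return np.inf
    z = x * nu / sigma**2
    # log Rician pdf, written with the exponentially scaled Bessel function for stability
    logpdf = np.log(x) - 2*np.log(sigma) - (x**2 + nu**2) / (2*sigma**2) + z + np.log(i0e(z))
    return -logpdf.sum()

res = minimize(neg_log_lik, x0=[x.mean(), x.std()], method="Nelder-Mead")
print(res.x)    # should be close to (100, 20)
```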

  32. CORA: Emission Line Fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
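
    The sketch below illustrates the core of this approach: a Poisson (Cash-type) likelihood for a Gaussian emission line over a flat background, maximized numerically on simulated low-count data. It is a generic illustration, not CORA's fixed-point implementation.

```python
# Hedged sketch of maximum likelihood line fitting with Poisson statistics
# (a single Gaussian line plus flat background, fit to binned counts).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
wav = np.linspace(13.0, 14.0, 200)                       # wavelength grid [Å]

def model(theta, w):
    amp, center, width, bkg = theta
    return bkg + amp * np.exp(-0.5 * ((w - center) / width) ** 2)

theta_true = [30.0, 13.55, 0.02, 2.0]
counts = rng.poisson(model(theta_true, wav))             # low-count spectrum

def neg_log_lik(theta):
    mu = model(theta, wav)
    if np.any(mu <= 0):
        return np.inf
    # Poisson log-likelihood up to a constant (the Cash statistic divided by -2)
    return -(counts * np.log(mu) - mu).sum()

res = minimize(neg_log_lik, x0=[10.0, 13.5, 0.05, 1.0], method="Nelder-Mead")
print(res.x)     # (amplitude, line center, width, background)
```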

  33. CORA - emission line fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.

  34. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.

  35. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)

  36. Maximum likelihood inference of reticulate evolutionary histories.

    PubMed

    Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay

    2014-11-18

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173

  37. PAML 4: phylogenetic analysis by maximum likelihood.

    PubMed

    Yang, Ziheng

    2007-08-01

    PAML, currently in version 4, is a package of programs for phylogenetic analyses of DNA and protein sequences using maximum likelihood (ML). The programs may be used to compare and test phylogenetic trees, but their main strengths lie in the rich repertoire of evolutionary models implemented, which can be used to estimate parameters in models of sequence evolution and to test interesting biological hypotheses. Uses of the programs include estimation of synonymous and nonsynonymous rates (d(N) and d(S)) between two protein-coding DNA sequences, inference of positive Darwinian selection through phylogenetic comparison of protein-coding genes, reconstruction of ancestral genes and proteins for molecular restoration studies of extinct life forms, combined analysis of heterogeneous data sets from multiple gene loci, and estimation of species divergence times incorporating uncertainties in fossil calibrations. This note discusses some of the major applications of the package, which includes example data sets to demonstrate their use. The package is written in ANSI C, and runs under Windows, Mac OSX, and UNIX systems. It is available at http://abacus.gene.ucl.ac.uk/software/paml.html.
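
    A minimal, hedged example of driving a codeml site-model run from Python is sketched below; the file names are placeholders and the control-file options shown are only the commonly used ones (consult the PAML manual for the authoritative list).

```python
# Hedged sketch of driving a typical codeml site-model analysis from Python: write a
# minimal control file and call the codeml executable (file names are placeholders;
# option names follow the usual codeml.ctl conventions, but check the PAML manual).
import subprocess
from pathlib import Path

ctl = """
      seqfile = alignment.phy     * codon alignment
     treefile = tree.nwk          * tree topology
      outfile = results.txt
      seqtype = 1                 * 1 = codon sequences
    CodonFreq = 2                 * F3x4 codon frequencies
        model = 0                 * one omega ratio across branches
      NSsites = 0 1 2             * site models M0, M1a, M2a (M2a allows positive selection)
    fix_omega = 0
        omega = 0.4               * initial dN/dS
"""
Path("codeml.ctl").write_text(ctl)
subprocess.run(["codeml", "codeml.ctl"], check=True)   # assumes codeml is on the PATH
```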

  38. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  39. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Lui, Kenneth W. K.; So, H. C.

    2009-12-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.

  40. Maximum likelihood as a common computational framework in tomotherapy.

    PubMed

    Olivera, G H; Shepard, D M; Reckwerdt, P J; Ruchala, K; Zachman, J; Fitchard, E E; Mackie, T R

    1998-11-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. PMID:9832016

  1. The maximum likelihood dating of magnetostratigraphic sections

    NASA Astrophysics Data System (ADS)

    Man, Otakar

    2011-04-01

    In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.

  2. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  3. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
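    A minimal sketch of the underlying estimation problem is shown below: given a matrix of transition counts observed at a fixed lag tau, a rate matrix is found by maximizing the likelihood of the matrix exponential. It uses a generic log-parameterization of the off-diagonal rates and omits the detailed-balance constraint and the efficiency gains that are the paper's actual contribution.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def fit_rate_matrix(counts, tau):
    # Illustrative sketch: maximize sum_ij counts_ij * log P_ij(tau) with
    # P(tau) = expm(tau * K).  Off-diagonal rates are kept positive via a
    # log parameterization; rows of K sum to zero by construction.
    n = counts.shape[0]
    off = ~np.eye(n, dtype=bool)

    def build_K(theta):
        K = np.zeros((n, n))
        K[off] = np.exp(theta)
        K[np.diag_indices(n)] = -K.sum(axis=1)
        return K

    def neg_log_like(theta):
        P = expm(tau * build_K(theta))
        return -np.sum(counts * np.log(np.maximum(P, 1e-300)))

    res = minimize(neg_log_like, np.zeros(n * (n - 1)), method="L-BFGS-B")
    return build_K(res.x)
```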

  4. A maximum likelihood approach to generate hypotheses on the evolution and historical biogeography in the Lower Volga Valley regions (southwest Russia).

    PubMed

    Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico

    2012-07-01

    The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appears to be younger than that of the adjacent steppes and desert-like regions, regardless of the chronology of transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent from one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179

  5. Weibull distribution based on maximum likelihood with interval inspection data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.

    1985-01-01

    The two Weibull parameters are determined by the method of maximum likelihood. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
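    The abstract gives no computational detail, but the general idea can be sketched as follows: with failures known only to fall within inspection intervals (left, right], and surviving units right-censored, the Weibull shape and scale are found by numerically maximizing the interval-censored likelihood. The function names and the (shape, scale) parameterization are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_interval_mle(lefts, rights, censored=()):
    # Illustrative sketch: interval-censored Weibull likelihood.
    # Each failure contributes F(right) - F(left); each right-censored
    # unit contributes the survival probability at its censoring time.
    lefts = np.asarray(lefts, float)
    rights = np.asarray(rights, float)
    censored = np.asarray(censored, float)

    def cdf(t, shape, scale):
        return -np.expm1(-(t / scale) ** shape)   # 1 - exp(-(t/scale)^shape)

    def neg_log_like(params):
        shape, scale = np.exp(params)             # keep both parameters positive
        p = cdf(rights, shape, scale) - cdf(lefts, shape, scale)
        ll = np.sum(np.log(np.maximum(p, 1e-300)))
        if censored.size:
            ll += np.sum(-(censored / scale) ** shape)   # log survival terms
        return -ll

    res = minimize(neg_log_like, x0=np.log([1.0, np.mean(rights)]),
                   method="Nelder-Mead")
    return np.exp(res.x)   # (shape, scale)
```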

  6. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
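    For reference, the standard EM iteration for the maximum-likelihood fit of a two-component univariate normal mixture looks roughly like the sketch below; it is a generic textbook version, not the authors' code, and starting values and iteration count are arbitrary choices.

```python
import numpy as np

def em_two_component_normal(x, n_iter=200):
    # Generic EM sketch for a two-component normal mixture:
    # E-step computes responsibilities, M-step updates weights,
    # means and variances by weighted maximum likelihood.
    x = np.asarray(x, float)
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```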

  7. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  8. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  9. Penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-08-01

    Detecting cancerous lesions is one major application in emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, conventional penalty function, and a penalty function for isotropic point spread function. The lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement is dependent on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.

  10. Effects of parameter estimation on maximum-likelihood bootstrap analysis.

    PubMed

    Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack

    2010-08-01

    Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data.

  11. Maximum likelihood estimation applied to multiepoch MEG/EEG analysis

    NASA Astrophysics Data System (ADS)

    Baryshnikov, Boris V.

    A maximum likelihood based algorithm for reducing the effects of spatially colored noise in evoked response MEG and EEG experiments is presented. The signal of interest is modeled as the low rank mean, while the noise is modeled as a Kronecker product of spatial and temporal covariance matrices. The temporal covariance is assumed known, while the spatial covariance is estimated as part of the algorithm. In contrast to prestimulus based whitening followed by principal component analysis, our algorithm does not require signal-free data for noise whitening and thus is more effective with non-stationary noise and produces better quality whitening for a given data record length. The efficacy of this approach is demonstrated using simulated and real MEG data. Next, a study in which we characterize MEG cortical response to coherent vs. incoherent motion is presented. It was found that coherent motion of the object induces not only an early sensory response around 180 ms relative to the stimulus onset but also a late field in the 250--500 ms range that has not been observed previously in similar random dot kinematogram experiments. The late field could not be resolved without signal processing using the maximum likelihood algorithm. The late activity localized to parietal areas. This is what would be expected. We believe that the late field corresponds to higher order processing related to the recognition of the moving object against the background. Finally, a maximum likelihood based dipole fitting algorithm is presented. It is suitable for dipole fitting of evoked response MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, eliminating the stationarity assumption implicit in prestimulus based whitening approaches. The preliminary results of the application of this algorithm to the simulated data show its

  12. Maximum likelihood dipole fitting in spatially colored noise.

    PubMed

    Baryshnikov, B V; Van Veen, B D; Wakai, R T

    2004-11-30

    We evaluated a maximum likelihood dipole-fitting algorithm for somatosensory evoked field (SEF) MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, which eliminates the stationarity assumption implicit in prestimulus based whitening approaches. The performance of the method, including its effectiveness in comparison to other localization techniques (dipole fitting, LCMV and MUSIC) was evaluated using the bootstrap technique. Synthetic data results demonstrated robustness of the algorithm in the presence of relatively high levels of noise when traditional dipole fitting algorithms fail. Application of the algorithm to adult somatosensory MEG data showed that while it is not advantageous for high SNR data, it definitely provides improved performance (measured by the spread of localizations) as the data sample size decreases.

  13. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    PubMed

    Chor, Benny; Snir, Sagi

    2004-12-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three-taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four-taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  14. Assessing allelic dropout and genotype reliability using maximum likelihood.

    PubMed Central

    Miller, Craig R; Joyce, Paul; Waits, Lisette P

    2002-01-01

    A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective, but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071

  15. Maximum likelihood based classification of electron tomographic data.

    PubMed

    Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan

    2011-01-01

    Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.

  16. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  17. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  18. Targeted Maximum Likelihood Based Causal Inference: Part I

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is

  19. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  20. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  1. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, these estimates can then be used as preliminary estimates for various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
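    A small sketch of the estimation core is given below: under a radial Gaussian noise model, the ML circle fit reduces to minimizing the radial residuals, which a standard nonlinear least-squares routine can handle. This is only the geometric fit, not the convolution or phase-coded-kernel machinery discussed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle_ml(points):
    # Illustrative sketch: minimize |p_i - center| - radius, the ML
    # solution when measurement noise acts radially and is Gaussian.
    # A crude centroid-based start keeps the solver stable.
    points = np.asarray(points, float)
    c0 = points.mean(axis=0)
    r0 = np.mean(np.linalg.norm(points - c0, axis=1))

    def residuals(params):
        cx, cy, r = params
        return np.linalg.norm(points - [cx, cy], axis=1) - r

    res = least_squares(residuals, x0=[c0[0], c0[1], r0])
    cx, cy, r = res.x
    return (cx, cy), r
```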

  2. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. Basic algebra is excessively complicated and a Maple code implementation is an important task in the solution process. Low order maximum likelihood moments are given and also Fisher's examples relating to data associated with ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.
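    For concreteness, the sketch below computes negative binomial maximum-likelihood estimates numerically from count data, using SciPy's (n, prob) parameterization rather than the probability generating function form quoted above; the mapping between parameterizations, and the series expansions studied in the paper, are left aside.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def negbin_mle(counts):
    # Illustrative sketch: maximize the negative binomial log-likelihood
    # over (n, prob), with n on the log scale and prob on the logit scale
    # so the optimizer works on unconstrained parameters.
    counts = np.asarray(counts)

    def neg_log_like(params):
        n = np.exp(params[0])
        prob = 1.0 / (1.0 + np.exp(-params[1]))
        return -nbinom.logpmf(counts, n, prob).sum()

    res = minimize(neg_log_like, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1]))   # (n, prob)
```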

  3. Use of historical information in a maximum-likelihood framework

    USGS Publications Warehouse

    Cohn, T.A.; Stedinger, J.R.

    1987-01-01

    This paper discusses flood-quantile estimators which can employ historical and paleoflood information, both when the magnitudes of historical flood peaks are known, and when only threshold-exceedance information is available. Maximum likelihood, quasi-maximum likelihood and curve fitting methods for simultaneous estimation of 1, 2 and 3 unknown parameters are examined. The information contained in a 100 yr record of historical observations, during which the flood perception threshold was near the 10 yr flood level (i.e., on average, one flood in ten is above the threshold and hence is recorded), is equivalent to roughly 43, 64 and 78 years of systematic record in terms of the improvement of the precision of 100 yr flood estimators when estimating 1, 2 and 3 parameters, respectively. With the perception threshold at the 100 yr flood level, the historical data was worth 13, 20 and 46 years of systematic data when estimating 1, 2 and 3 parameters, respectively. © 1987.
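    A minimal sketch of how threshold-exceedance historical information enters such a likelihood is given below, for an assumed lognormal annual-peak distribution (the paper works with other estimators and distributions): the systematic peaks contribute density terms, and the historical period contributes a binomial term for the number of years exceeding the perception threshold. All names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lognormal_flood_mle(systematic, threshold, hist_years, hist_exceedances):
    # Illustrative sketch (lognormal assumed): systematic annual peaks give
    # density terms; the historical period of hist_years years, in which
    # only the count of threshold exceedances is known, gives a binomial
    # exceedance term.
    logs = np.log(systematic)

    def neg_log_like(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = norm.logpdf(logs, mu, sigma).sum()
        p_exceed = norm.sf(np.log(threshold), mu, sigma)
        ll += hist_exceedances * np.log(p_exceed)
        ll += (hist_years - hist_exceedances) * np.log1p(-p_exceed)
        return -ll

    res = minimize(neg_log_like, x0=[logs.mean(), np.log(logs.std())],
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])   # (mu, sigma) of log annual peaks
```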

  4. Maximum likelihood synchronizer for binary overlapping PCM/NRZ signals.

    NASA Technical Reports Server (NTRS)

    Wang, C. D.; Noack, T. L.; Morris, J. F.

    1973-01-01

    A maximum likelihood parameter estimation technique for the self bit synchronization problem is investigated. The input to the bit synchronizer is a sequence of binary overlapping PCM/NRZ signals in the presence of white Gaussian noise with zero mean and known variance. The resulting synchronizer consists of matched filters, a transition device and a weighting function. Finally, the performance is examined by Monte Carlo simulations.

  5. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  6. Maximum-likelihood registration of range images with missing data.

    PubMed

    Sharp, Gregory C; Lee, Sang W; Wehe, David K

    2008-01-01

    Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329

  7. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  8. STECKMAP: STEllar Content and Kinematics via Maximum A Posteriori likelihood

    NASA Astrophysics Data System (ADS)

    Ocvirk, P.

    2011-08-01

    STECKMAP stands for STEllar Content and Kinematics via Maximum A Posteriori likelihood. It is a tool for interpreting galaxy spectra in terms of their stellar populations through the derivation of their star formation history, age-metallicity relation, kinematics and extinction. The observed spectrum is projected onto a temporal sequence of models of single stellar populations, so as to determine a linear combination of these models that best fits the observed spectrum. The weights of the various components of this linear combination indicate the stellar content of the population. This procedure is regularized using various penalizing functions. The principles of the method are detailed in Ocvirk et al. 2006.

  9. Maximum likelihood tuning of a vehicle motion filter

    NASA Technical Reports Server (NTRS)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.

  10. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  11. The relative performance of targeted maximum likelihood estimators.

    PubMed

    Porter, Kristin E; Gruber, Susan; van der Laan, Mark J; Sekhon, Jasjeet S

    2011-01-01

    There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes, are especially suitable for dealing with positivity violations because in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570

  12. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    NASA Astrophysics Data System (ADS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
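    The 'likelihood-free' side of the comparison can be illustrated with a bare-bones rejection ABC sampler; the sketch below uses a toy Poisson example and has nothing to do with the atom-maser model or the correlation statistics used in the paper.

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, summary,
                  n_draws=100_000, eps=0.05):
    # Generic rejection ABC sketch: draw a parameter from the prior,
    # simulate data, keep the draw if its summary statistic lands within
    # eps of the observed one.  Accepted draws approximate the posterior.
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(summary(simulate(theta)) - observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy example: infer a Poisson rate from the mean of 50 observed counts
rng = np.random.default_rng(1)
data = rng.poisson(3.0, size=50)
post = abc_rejection(observed_stat=data.mean(),
                     simulate=lambda lam: rng.poisson(lam, size=50),
                     prior_sample=lambda: rng.uniform(0.0, 10.0),
                     summary=lambda sim: sim.mean(),
                     n_draws=20_000, eps=0.1)
print(post.mean())   # roughly 3
```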

  13. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

  14. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  15. Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Wells, W. R.

    1979-01-01

    A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.

  16. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
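    One concrete instance of this ML principle is the binary configuration model, in which hidden variables x_i are fixed by requiring every node's expected degree, the sum over j of x_i x_j / (1 + x_i x_j), to equal its observed degree; a simple damped fixed-point iteration is sketched below. This is an illustrative solver, not the authors' code, and convergence behavior on a given degree sequence should be checked.

```python
import numpy as np

def fit_hidden_variables(degrees, n_iter=2000, tol=1e-10):
    # Illustrative sketch: solve k_i = sum_{j != i} x_i x_j / (1 + x_i x_j)
    # for the hidden variables x_i by damped fixed-point iteration.
    k = np.asarray(degrees, float)
    x = k / np.sqrt(k.sum() + 1.0)           # rough starting point
    for _ in range(n_iter):
        denom = 1.0 + np.outer(x, x)
        s = (x / denom).sum(axis=1) - x / (1.0 + x * x)   # exclude j = i
        x_new = k / np.maximum(s, 1e-12)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = 0.5 * (x + x_new)                # damping for stability
    return x
```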

  17. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    SciTech Connect

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from 210Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  18. Maximum Likelihood Estimation of GEVD: Applications in Bioinformatics.

    PubMed

    Thomas, Minta; Daemen, Anneleen; De Moor, Bart

    2014-01-01

    We propose a method, maximum likelihood estimation of generalized eigenvalue decomposition (MLGEVD), that employs a well known technique relying on the generalization of singular value decomposition (SVD). The main aim of the work is to show the tight equivalence between MLGEVD and generalized ridge regression. This relationship reveals an important mathematical property of GEVD in which the second argument acts as prior information in the model. Thus we show that MLGEVD allows the incorporation of external knowledge about the quantities of interest into the estimation problem. We illustrate the importance of prior knowledge in clinical decision making/identifying differentially expressed genes with case studies for which microarray data sets with corresponding clinical/literature information are available. On all three case studies, MLGEVD outperformed GEVD on prediction in terms of test area under the ROC curve (test AUC). MLGEVD results in significantly improved diagnosis, prognosis and prediction of therapy response.

  19. Maximum likelihood analysis of low energy CDMS II germanium data

    NASA Astrophysics Data System (ADS)

    Agnese, R.; Anderson, A. J.; Balakishiyeva, D.; Basu Thakur, R.; Bauer, D. A.; Billard, J.; Borgland, A.; Bowles, M. A.; Brandt, D.; Brink, P. L.; Bunker, R.; Cabrera, B.; Caldwell, D. O.; Cerdeno, D. G.; Chagani, H.; Chen, Y.; Cooley, J.; Cornell, B.; Crewdson, C. H.; Cushman, P.; Daal, M.; Di Stefano, P. C. F.; Doughty, T.; Esteban, L.; Fallows, S.; Figueroa-Feliciano, E.; Fritts, M.; Godfrey, G. L.; Golwala, S. R.; Graham, M.; Hall, J.; Harris, H. R.; Hertel, S. A.; Hofer, T.; Holmgren, D.; Hsu, L.; Huber, M. E.; Jastram, A.; Kamaev, O.; Kara, B.; Kelsey, M. H.; Kennedy, A.; Kiveni, M.; Koch, K.; Leder, A.; Loer, B.; Lopez Asamar, E.; Mahapatra, R.; Mandic, V.; Martinez, C.; McCarthy, K. A.; Mirabolfathi, N.; Moffatt, R. A.; Moore, D. C.; Nelson, R. H.; Oser, S. M.; Page, K.; Page, W. A.; Partridge, R.; Pepin, M.; Phipps, A.; Prasad, K.; Pyle, M.; Qiu, H.; Rau, W.; Redl, P.; Reisetter, A.; Ricci, Y.; Rogers, H. E.; Saab, T.; Sadoulet, B.; Sander, J.; Schneck, K.; Schnee, R. W.; Scorza, S.; Serfass, B.; Shank, B.; Speller, D.; Upadhyayula, S.; Villano, A. N.; Welliver, B.; Wright, D. H.; Yellin, S.; Yen, J. J.; Young, B. A.; Zhang, J.; SuperCDMS Collaboration

    2015-03-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using geant4 to simulate the surface-event background from 210Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  20. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    NASA Astrophysics Data System (ADS)

    Boyer, E.; Petitdidier, M.; Larzabal, P.

    2004-11-01

    This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has been recently introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  1. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  2. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood.

    PubMed

    Guindon, Stéphane; Gascuel, Olivier

    2003-10-01

    The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.

  3. Maximum likelihood resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, J.; Jenkins, C.

    2005-12-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and square uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty
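    The deterministic core of this resampling step, the maximizer of the product of the two Gaussian pdfs, is simply the precision-weighted mean; the sketch below shows only that core and leaves out the Monte Carlo sampling of the data probability space described above. Names are illustrative.

```python
def ml_resample(data_value, data_var, krige_mean, krige_var):
    # Illustrative sketch: the product of two Gaussian pdfs is maximized
    # at the precision-weighted mean; the combined variance is the
    # inverse of the summed precisions.
    w_data = 1.0 / data_var
    w_krige = 1.0 / krige_var
    best = (w_data * data_value + w_krige * krige_mean) / (w_data + w_krige)
    return best, 1.0 / (w_data + w_krige)
```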

  4. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
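
    The Poisson-regression fit described above can be sketched as a direct minimization of the negative log-likelihood for a linear-quadratic yield curve. The sketch below uses invented dose/count data and a simplified mean function λ(d) = αd + βd² per cell (the coefficient of d² is treated as a single free parameter for a fixed temporal pattern); it illustrates the estimation idea only and is not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: dose (Gy), cells scored, and dicentrics observed.
dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([2000, 1500, 1000, 800, 500])
dicentrics = np.array([15, 40, 110, 190, 210])

def neg_log_lik(params):
    alpha, beta = params
    lam = cells * (alpha * dose + beta * dose**2)      # expected dicentric count
    return -(dicentrics * np.log(lam) - lam).sum()     # Poisson log-likelihood (up to a constant)

fit = minimize(neg_log_lik, x0=[0.01, 0.05], method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, None)])
alpha_hat, beta_hat = fit.x
print(alpha_hat, beta_hat)   # maximum likelihood estimates of the yield coefficients
```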

  5. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    DOE PAGESBeta

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  6. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  7. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  8. Maximum likelihood random galaxy catalogues and luminosity function estimation

    NASA Astrophysics Data System (ADS)

    Cole, Shaun

    2011-09-01

    We present a new algorithm to generate a random (unclustered) version of a magnitude-limited observational galaxy redshift catalogue. It takes into account both galaxy evolution and the perturbing effects of large-scale structure. The key to the algorithm is a maximum likelihood (ML) method for jointly estimating both the luminosity function (LF) and the overdensity as a function of redshift. The random catalogue algorithm then works by cloning each galaxy in the original catalogue, with the number of clones determined by the ML solution. Each of these cloned galaxies is then assigned a random redshift uniformly distributed over the accessible survey volume, taking account of the survey magnitude limit(s) and, optionally, both luminosity and number density evolution. The resulting random catalogues, which can be employed in traditional estimates of galaxy clustering, make fuller use of the information available in the original catalogue and hence are superior to simply fitting a functional form to the observed redshift distribution. They are particularly well suited to studies of the dependence of galaxy clustering on galaxy properties, as each galaxy in the random catalogue has the same list of attributes as measured for the galaxies in the genuine catalogue. The derivation of the joint overdensity and LF estimator reveals the limit in which the ML estimate reduces to the standard 1/Vmax LF estimate, namely when one makes the prior assumption that there are no fluctuations in the radial overdensity. The new ML estimator can be viewed as a generalization of the 1/Vmax estimate in which Vmax is replaced by a density-corrected Vdc,max.

  9. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimator does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant against separation and that the estimates always exist. The effect of high leverage points on the performance of the maximum estimated likelihood estimator is then investigated through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  10. Maximum-likelihood methods in cryo-EM. Part II: application to experimental data

    PubMed Central

    Scheres, Sjors H.W.

    2010-01-01

    With the advent of computationally feasible approaches to maximum likelihood image processing for cryo-electron microscopy, these methods have proven particularly useful in the classification of structurally heterogeneous single-particle data. A growing number of experimental studies have applied these algorithms to study macromolecular complexes with a wide range of structural variability, including non-stoichiometric complex formation, large conformational changes and combinations of both. This chapter aims to share the practical experience that has been gained from the application of these novel approaches. Current insights on how to prepare the data and how to perform two- or three-dimensional classifications are discussed together with aspects related to high-performance computing. Thereby, this chapter will hopefully be of practical use for those microscopists wanting to apply maximum likelihood methods in their own investigations. PMID:20888966

  11. Maximum-likelihood estimation of gene location by linkage disequilibrium

    SciTech Connect

    Hill, W.G.; Weir, B.S.

    1994-04-01

    Linkage disequilibrium, D, between a polymorphic disease and mapped markers can, in principle, be used to help find the map position of the disease gene. Likelihoods are therefore derived for the value of D conditional on the observed number of haplotypes in the sample and on the population parameter Nc, where N is the effective population size and c the recombination fraction between the disease and marker loci. The likelihood is computed explicitly for the case of two loci with heterozygote superiority and, more generally, by computer simulations assuming a steady state of constant population size and selective pressures or neutrality. It is found that the likelihood is, in general, not very dependent on the degree of selection at the loci and is very flat. This suggests that precise information on map position will not be obtained from estimates of linkage disequilibrium. 15 refs., 5 figs., 21 tabs.

  12. Maximum-likelihood estimation of admixture proportions from genetic data.

    PubMed Central

    Wang, Jinliang

    2003-01-01

    For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as a relatively low computational requirement compared with previous ones and flexibility in admixture models and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolf-like canid data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794

  13. Maximum-likelihood and other processors for incoherent and coherent matched-field localization.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2012-10-01

    This paper develops a series of maximum-likelihood processors for matched-field source localization given various states of information regarding the frequency and time variation of source amplitude and phase, and compares these with existing approaches to coherent processing with incomplete source knowledge. The comparison involves elucidating each processor's approach to source spectral information within a unifying formulation, which provides a conceptual framework for classifying and comparing processors and explaining their relative performance, as quantified in a numerical study. The maximum-likelihood processors represent optimal estimators given the assumption of Gaussian noise, and are based on analytically maximizing the corresponding likelihood function over explicit unknown source spectral parameters. Cases considered include knowledge of the relative variation in source amplitude over time and/or frequency (e.g., a flat spectrum), and tracking the relative phase variation over time, as well as incoherent and coherent processing. Other approaches considered include the conventional (Bartlett) processor, cross-frequency incoherent processor, pair-wise processor, and coherent normalized processor. Processor performance is quantified as the probability of correct localization from Monte Carlo appraisal over a large number of random realizations of noise, source location, and environmental parameters. Processors are compared as a function of signal-to-noise ratio, number of frequencies, and number of sensors. PMID:23039424

  14. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
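
    As a rough numerical sketch of how such a local log-likelihood can be evaluated at a single grid point, the code below treats the three conditional densities p(ρ|PROT), p(ρ|SOLV), and p(ρ|H) as Gaussians with invented means and widths; the real method derives these distributions from the crystallographic data rather than assuming them.

```python
import numpy as np
from scipy.stats import norm

def local_log_likelihood(rho, p_prot, p_solv, p_h,
                         prot=(0.4, 0.15),    # assumed (mean, sd) of density in protein
                         solv=(0.0, 0.05),    # assumed (mean, sd) of density in solvent
                         motif=(0.6, 0.10)):  # assumed (mean, sd) near a known motif
    """Log of the three-component mixture
    p(rho|PROT) p_PROT(x) + p(rho|SOLV) p_SOLV(x) + p(rho|H) p_H(x)
    for the electron density rho at one grid point x."""
    mix = (p_prot * norm.pdf(rho, *prot)
           + p_solv * norm.pdf(rho, *solv)
           + p_h * norm.pdf(rho, *motif))
    return np.log(mix)

# A grid point thought to be 70% protein, 20% solvent, 10% helical motif:
print(local_log_likelihood(0.55, 0.7, 0.2, 0.1))
```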

  16. Spatially explicit maximum likelihood methods for capture-recapture studies.

    PubMed

    Borchers, D L; Efford, M G

    2008-06-01

    Live-trapping capture-recapture studies of animal populations with fixed trap locations inevitably have a spatial component: animals close to traps are more likely to be caught than those far away. This is not addressed in conventional closed-population estimates of abundance and without the spatial component, rigorous estimates of density cannot be obtained. We propose new, flexible capture-recapture models that use the capture locations to estimate animal locations and spatially referenced capture probability. The models are likelihood-based and hence allow use of Akaike's information criterion or other likelihood-based methods of model selection. Density is an explicit parameter, and the evaluation of its dependence on spatial or temporal covariates is therefore straightforward. Additional (nonspatial) variation in capture probability may be modeled as in conventional capture-recapture. The method is tested by simulation, using a model in which capture probability depends only on location relative to traps. Point estimators are found to be unbiased and standard error estimators almost unbiased. The method is used to estimate the density of Red-eyed Vireos (Vireo olivaceus) from mist-netting data from the Patuxent Research Refuge, Maryland, U.S.A. Estimates agree well with those from an existing spatially explicit method based on inverse prediction. A variety of additional spatially explicit models are fitted; these include models with temporal stratification, behavioral response, and heterogeneous animal home ranges. PMID:17970815
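
    A stripped-down numerical sketch of the spatial idea is given below: detection probability falls off as a half-normal function of the distance between a trap and an (unobserved) activity centre, and the likelihood of one animal's capture history is obtained by averaging over a grid of candidate centres. The trap layout, capture history, and starting values are invented, and the sketch omits the density parameter and the conditioning on detection used in the full likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 5 x 5 trap grid (unit spacing) and one animal's binary capture
# history over those traps for a single occasion.
traps = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
history = np.zeros(len(traps))
history[[6, 7, 11]] = 1.0                      # caught at three neighbouring traps

# Candidate activity centres ("habitat mask") surrounding the trap grid.
mask = np.array([(x, y) for x in np.linspace(-2, 6, 33)
                        for y in np.linspace(-2, 6, 33)])

def neg_log_lik(params):
    g0, sigma = params
    d2 = ((mask[:, None, :] - traps[None, :, :])**2).sum(-1)   # centre-to-trap squared distances
    p = g0 * np.exp(-d2 / (2.0 * sigma**2))                    # half-normal detection function
    # Bernoulli likelihood of the observed history for each candidate centre,
    # then marginalised over centres (uniform weight over the mask).
    lik = np.prod(np.where(history == 1, p, 1.0 - p), axis=1).mean()
    return -np.log(lik)

fit = minimize(neg_log_lik, x0=[0.3, 1.0], method="L-BFGS-B",
               bounds=[(1e-3, 0.999), (1e-2, None)])
print(fit.x)   # estimated (g0, sigma) of the detection function
```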

  17. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    PubMed

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. The ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, in the proposed heart-rate estimator we first employ a subspace approach to remove the wandering baseline, then use a simple nonlinear absolute operation to reduce the high-frequency noise contamination, and finally apply the maximum likelihood estimation technique to estimate the R-R interval. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique to realize the correlation, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system. PMID:19162641
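
    The processing chain sketched in the abstract (baseline removal, rectification, FFT-based correlation, peak picking) can be illustrated with a much-simplified stand-in; the subspace filter is replaced here by simple mean removal, and the synthetic "ECG" and all parameters are invented.

```python
import numpy as np

def estimate_heart_rate(ecg, fs):
    """Crude sketch: detrend, rectify, then take the lag of the autocorrelation
    peak (computed via FFT) inside a physiologically plausible R-R range."""
    x = np.abs(ecg - np.mean(ecg))                 # stand-in for baseline removal + |.|
    n = len(x)
    spec = np.abs(np.fft.rfft(x - x.mean(), 2 * n)) ** 2
    acf = np.fft.irfft(spec)[:n]                   # autocorrelation via Wiener-Khinchin
    lo, hi = int(0.3 * fs), int(1.5 * fs)          # R-R intervals for 40-200 bpm
    rr = lo + np.argmax(acf[lo:hi])                # dominant R-R lag in samples
    return 60.0 * fs / rr                          # heart rate in beats per minute

# Synthetic test: a 1 Hz train of sharp positive "R waves" at 250 Hz plus noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.maximum(np.sin(2 * np.pi * 1.0 * t), 0.0) ** 20 + 0.1 * np.random.randn(len(t))
print(estimate_heart_rate(ecg, fs))                # close to 60 bpm
```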

  18. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  19. Digital combining-weight estimation for broadband sources using maximum-likelihood estimates

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.; Vilnrotter, V. A.

    1994-01-01

    An algorithm is described for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system and is compared with the maximum-likelihood estimate. The latter provides some improvement in performance, with an increase in computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.

  20. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  1. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko O.; Shenton, L.R.

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with problems concerning the valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

  2. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  3. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner’s spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner’s overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.

  4. Maximum likelihood positioning and energy correction for scintillation detectors.

    PubMed

    Lerche, Christoph W; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-21

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time. PMID:26836394

  5. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  6. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
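
    The REML-MVN idea can be sketched generically: take the vector of distinct G-matrix elements, its REML point estimate, and the sampling covariance from the inverse information matrix, draw many multivariate normal parameter vectors, and propagate each draw to the statistic of interest. The 3 x 3 example below uses invented numbers and a diagonal sampling covariance purely for illustration; it is not the WOMBAT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical REML estimate of the 6 distinct elements of a 3 x 3 G matrix
# (order g11, g12, g13, g22, g23, g33) and their sampling covariance, which in
# practice comes from the inverse information matrix of the mixed-model fit.
g_hat = np.array([1.0, 0.3, 0.1, 0.8, 0.2, 0.5])
g_cov = 0.01 * np.eye(6)                      # stand-in; real matrices are dense

def unvech(v):
    """Rebuild the symmetric 3 x 3 matrix from its upper-triangle vector."""
    G = np.zeros((3, 3))
    G[np.triu_indices(3)] = v
    return G + np.triu(G, 1).T

# REML-MVN: sample parameter vectors from N(g_hat, g_cov) and propagate each
# draw to a scalar statistic (here, average evolvability = trace(G)/3).
draws = rng.multivariate_normal(g_hat, g_cov, size=5000)
stat = np.array([np.trace(unvech(v)) / 3.0 for v in draws])
print(stat.mean(), stat.std())                # point value and its sampling SE
```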

  7. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  8. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  9. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  10. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and using the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.

  11. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve at different rates, under different selective pressures, or with different codon bias. Instead of modeling these differences with ever more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229

  12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
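
    A minimal univariate sketch of such an iterative procedure is shown below, written as the familiar EM-style fixed-point update for a two-component normal mixture (responsibilities in the E-step, weighted moment updates in the M-step). The data are synthetic and the scheme is a generic illustration rather than the specific algorithm of the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample from a two-component normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

K = 2
w = np.full(K, 1.0 / K)            # mixing proportions
mu = rng.choice(x, K)              # initial component means
sigma = np.full(K, x.std())        # initial component standard deviations

def normal_pdf(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of each component for each observation.
    dens = w * normal_pdf(x[:, None], mu, sigma)        # shape (n, K)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted moment estimates give the next iterate.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sigma)                # close to the generating mixture parameters
```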

  14. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  15. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  16. A likelihood approach to calculating risk support intervals

    SciTech Connect

    Leal, S.M.; Ott, J.

    1994-05-01

    Genetic risks are usually computed under the assumption that genetic parameters, such as the recombination fraction, are known without error. Uncertainty in the estimates of these parameters must translate into uncertainty regarding the risk. To allow for uncertainties in parameter values, one may employ Bayesian techniques or, in a maximum-likelihood framework, construct a support interval (SI) for the risk. Here the authors have implemented the latter approach. The SI for the risk is based on the SIs of parameters involved in the pedigree likelihood. As an empirical example, the SI for the risk was calculated for probands who are members of chronic spinal muscular atrophy kindreds. In order to evaluate the accuracy of a risk in genetic counseling situations, the authors advocate that, in addition to a point estimate, an SI for the risk should be calculated. 16 refs., 1 fig., 1 tab.

  17. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
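
    A standard supervised maximum likelihood classifier of this kind assigns each pixel to the class whose multivariate normal density (estimated from training areas) is highest. The sketch below uses invented per-class band statistics for three cover types; it illustrates the decision rule only and has no connection to the actual BOREAS training data.

```python
import numpy as np

# Hypothetical per-class training statistics: mean vector and covariance of
# three TM band values, as would be estimated from training polygons.
classes = {
    "conifer":   (np.array([30.0, 25.0, 20.0]), np.diag([9.0, 9.0, 16.0])),
    "deciduous": (np.array([45.0, 60.0, 35.0]), np.diag([16.0, 25.0, 16.0])),
    "water":     (np.array([15.0, 10.0,  5.0]), np.diag([4.0, 4.0, 4.0])),
}

def discriminant(x, mean, cov):
    """Log of the multivariate normal density, up to a constant: the usual
    maximum likelihood classification score."""
    diff = x - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + diff @ np.linalg.solve(cov, diff))

def classify(pixel):
    scores = {name: discriminant(pixel, m, S) for name, (m, S) in classes.items()}
    return max(scores, key=scores.get)

print(classify(np.array([32.0, 28.0, 22.0])))   # -> "conifer"
```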

  18. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGESBeta

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  19. 2-D impulse noise suppression by recursive gaussian maximum likelihood estimation.

    PubMed

    Chen, Yang; Yang, Jian; Shu, Huazhong; Shi, Luyao; Wu, Jiasong; Luo, Limin; Coatrieux, Jean-Louis; Toumoulin, Christine

    2014-01-01

    An effective approach termed Recursive Gaussian Maximum Likelihood Estimation (RGMLE) is developed in this paper to suppress 2-D impulse noise. Two algorithms, termed RGMLE-C and RGMLE-CS, are derived using spatially adaptive variances, which are estimated based on certainty and on joint certainty and similarity information, respectively. To give a reliable implementation of the RGMLE-C and RGMLE-CS algorithms, a novel recursion-stopping strategy is proposed by evaluating the estimation error of uncorrupted pixels. Numerical experiments on different noise densities show that the proposed two algorithms can lead to significantly better results than some typical median-type filters. Efficient implementation is also realized via GPU (Graphics Processing Unit)-based parallelization techniques.

  20. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  1. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  2. Evaluations of Bayesian and maximum likelihood methods in PK models with below-quantification-limit data.

    PubMed

    Yang, Shuying; Roger, James

    2010-01-01

    Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, nevertheless these observed BQL data are informative and generally known to be lower than the lower limit of quantification (LLQ). Setting BQLs as missing data violates the usual missing at random (MAR) assumption applied to the statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ], and can be considered as censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each of the simulated data sets to generate data sets with certain amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters such as clearance and volume of distribution under each estimation approach was explored and compared.
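
    The censoring idea underlying both the maximum likelihood and Bayesian treatments can be illustrated on a deliberately simplified problem: log-concentrations assumed normal, with BQL values contributing the probability of falling below the LLQ rather than a density term. This is not a population PK model and all numbers are invented; it only shows the form of the censored likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical log-concentrations with a lower limit of quantification (LLQ).
true_mu, true_sd, llq = 1.0, 0.8, 0.7
y = rng.normal(true_mu, true_sd, 200)
observed = np.where(y >= llq, y, np.nan)          # BQL values are reported only as "< LLQ"

def neg_log_lik(params):
    mu, sd = params
    quantified = observed[~np.isnan(observed)]
    n_bql = int(np.isnan(observed).sum())
    # Quantified values contribute the normal density; each BQL value
    # contributes P(Y < LLQ), the censored-data term.
    ll = norm.logpdf(quantified, mu, sd).sum() + n_bql * norm.logcdf(llq, mu, sd)
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="L-BFGS-B",
               bounds=[(None, None), (1e-3, None)])
print(fit.x)    # recovers (mu, sd) far better than simply discarding BQL records
```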

  3. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    PubMed Central

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability to MTMM models with categorical data is discussed. PMID:24782791

  4. Long branch effects distort maximum likelihood phylogenies in simulations despite selection of the correct model.

    PubMed

    Kück, Patrick; Mayer, Christoph; Wägele, Johann-Wolfgang; Misof, Bernhard

    2012-01-01

    The aim of our study was to test the robustness and efficiency of maximum likelihood with respect to different long branch effects on multiple-taxon trees. We simulated data of different alignment lengths under two different 11-taxon trees and a broad range of different branch length conditions. The data were analyzed with the true model parameters as well as with estimated and incorrect assumptions about among-site rate variation. If length differences between connected branches strongly increase, tree inference with the correct likelihood model assumptions can fail. We found that incorporating invariant sites together with Γ distributed site rates in the tree reconstruction (Γ+I) increases the robustness of maximum likelihood in comparison with models using only Γ. The results show that for some topologies and branch lengths the reconstruction success of maximum likelihood under the correct model is still low for alignments with a length of 100,000 base positions. Altogether, the high confidence that is put in maximum likelihood trees is not always justified under certain tree shapes even if alignment lengths reach 100,000 base positions.

  5. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  6. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  7. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    SciTech Connect

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.

  8. MADmap: A MASSIVELY PARALLEL MAXIMUM LIKELIHOOD COSMIC MICROWAVE BACKGROUND MAP-MAKER

    SciTech Connect

    Cantalupo, C. M.; Borrill, J. D.; Kisner, T. S.; Jaffe, A. H.; Stompor, R.

    2010-03-01

    MADmap is a software application used to produce maximum likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by cosmic microwave background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne, and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels, and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms, and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analyzing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.

  9. Necessary conditions for a maximum likelihood estimate to become asymptotically unbiased and attain the Cramer-Rao lower bound. Part I. General approach with an application to time-delay and Doppler shift estimation.

    PubMed

    Naftali, E; Makris, N C

    2001-10-01

    Analytic expressions for the first order bias and second order covariance of a general maximum likelihood estimate (MLE) are presented. These expressions are used to determine general analytic conditions on sample size, or signal-to-noise ratio (SNR), that are necessary for an MLE to become asymptotically unbiased and attain minimum variance as expressed by the Cramer-Rao lower bound (CRLB). The expressions are then evaluated for multivariate Gaussian data. The results can be used to determine asymptotic biases, variances, and conditions for estimator optimality in a wide range of inverse problems encountered in ocean acoustics and many other disciplines. The results are then applied to rigorously determine conditions on SNR necessary for the MLE to become unbiased and attain minimum variance in the classical active sonar and radar time-delay and Doppler-shift estimation problems. The time-delay MLE is the time lag at the peak value of a matched filter output. It is shown that the matched filter estimate attains the CRLB for the signal's position when the SNR is much larger than the kurtosis of the expected signal's energy spectrum. The Doppler-shift MLE exhibits dual behavior for narrow band analytic signals. In a companion paper, the general theory presented here is applied to the problem of estimating the range and depth of an acoustic source submerged in an ocean waveguide.
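
    The time-delay part of the problem has a familiar concrete form: with a known replica in additive white Gaussian noise, the MLE is the lag at the peak of the matched filter (cross-correlation) output. The sketch below builds a synthetic pulse and a noisy delayed copy with invented parameters and recovers the delay; it does not reproduce the paper's bias and variance analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0                                   # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
pulse = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.1) / 0.02) ** 2)

true_delay = 0.237                            # seconds
received = np.roll(pulse, int(true_delay * fs)) + 0.5 * rng.standard_normal(len(t))

# Matched filter: cross-correlate the received data with the known replica and
# take the lag of the peak -- the time-delay MLE for white Gaussian noise.
xcorr = np.correlate(received, pulse, mode="full")
lags = np.arange(-len(t) + 1, len(t))
delay_hat = lags[np.argmax(xcorr)] / fs
print(delay_hat)                              # close to 0.237 at this SNR
```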

  10. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  11. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.

  12. Residual Diagrams Based on a Remarkably Simple Result Concerning the Variances of Maximum Likelihood Estimators.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    2002-01-01

    Presents a simple result concerning variances of maximum likelihood (ML) estimators. The result allows for construction of residual diagrams to evaluate whether ML estimators derived from independent samples can be assumed to be equal apart from random errors. Applies this result to the polytomous Rasch model. (SLD)

  13. A New Maximum Likelihood Estimator for the Population Squared Multiple Correlation.

    ERIC Educational Resources Information Center

    Alf, Edward F., Jr.; Graf, Richard G.

    2002-01-01

    Developed a new estimator for the population squared multiple correlation using maximum likelihood estimation. Data from 72 air control school graduates demonstrate that the new estimator has greater accuracy than other estimators with values that fall within the parameter space. (SLD)

  14. Maximum Likelihood Estimation of Factor Structures of Anxiety Measures: A Multiple Group Comparison.

    ERIC Educational Resources Information Center

    Kameoka, Velma A.; And Others

    Confirmatory maximum likelihood estimation of measurement models was used to evaluate the construct generality of self-report measures of anxiety across male and female samples. These measures included Spielberger's State-Trait Anxiety Inventory, Taylor's Manifest Anxiety Scale, and two forms of Endler, Hunt and Rosenstein's S-R Inventory of…

  15. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  16. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  17. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  18. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  19. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  20. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  1. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  2. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  3. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  4. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases.

  5. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.
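
    As a toy version of gradient-descent maximum likelihood fitting of a primitive surface, the sketch below recovers the center and radius of a sphere from noisy 3-D surface points by descending the negative Gaussian log-likelihood of the radial residuals. It omits the multi-image camera geometry of the paper; the data, initial guess, and step size are hypothetical.

      import numpy as np

      rng = np.random.default_rng(4)
      center_true, radius_true = np.array([0.5, -0.2, 1.0]), 2.0

      # Noisy points on the sphere surface (a stand-in for image-derived measurements).
      directions = rng.normal(size=(500, 3))
      directions /= np.linalg.norm(directions, axis=1, keepdims=True)
      points = center_true + radius_true * directions + 0.02 * rng.normal(size=(500, 3))

      # Gradient descent on the negative Gaussian log-likelihood, proportional here
      # to the sum of squared radial residuals ||p_i - c|| - r.
      theta = np.array([0.0, 0.0, 0.0, 1.0])       # [cx, cy, cz, r], crude initial guess
      lr = 0.2
      for _ in range(3000):
          diff = points - theta[:3]
          dist = np.linalg.norm(diff, axis=1)
          resid = dist - theta[3]
          grad_c = -(resid[:, None] * diff / dist[:, None]).mean(axis=0)
          grad_r = -resid.mean()
          theta -= lr * np.concatenate([grad_c, [grad_r]])
      print("center:", np.round(theta[:3], 3), " radius:", round(float(theta[3]), 3))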

  6. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
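
    The contrast drawn above between Maximum Likelihood Estimators and least-squares regression for distribution fitting can be illustrated with a small sketch: for synthetic power-law "fracture length" data, the closed-form MLE of the exponent is compared with a least-squares fit to a log-log histogram. The exponent, sample size, and binning are arbitrary choices, not values from the study.

      import numpy as np

      rng = np.random.default_rng(5)
      alpha_true, x_min, n = 2.5, 1.0, 500
      # Synthetic power-law "fracture lengths" via inverse-transform sampling.
      lengths = x_min * (1 - rng.random(n)) ** (-1.0 / (alpha_true - 1))

      # Closed-form maximum likelihood estimator of the exponent (Hill-type estimator).
      alpha_mle = 1 + n / np.sum(np.log(lengths / x_min))

      # Least-squares fit to a log-log histogram, for comparison; binning makes this
      # estimate biased and noisy, which is the weakness the MLE avoids.
      counts, edges = np.histogram(lengths, bins=20)
      centers = 0.5 * (edges[:-1] + edges[1:])
      mask = counts > 0
      slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
      alpha_ls = -slope

      print(f"true exponent {alpha_true}, MLE {alpha_mle:.2f}, log-log least squares {alpha_ls:.2f}")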

  7. Evaluating treatment effectiveness under model misspecification: A comparison of targeted maximum likelihood estimation with bias-corrected matching

    PubMed Central

    Gruber, Susan; Radice, Rosalba; Grieve, Richard; Sekhon, Jasjeet S

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maximum likelihood estimation is a double-robust method designed to reduce bias in the estimate of the parameter of interest. Bias-corrected matching reduces bias due to covariate imbalance between matched pairs by using regression predictions. We illustrate the methods in an evaluation of different types of hip prosthesis on the health-related quality of life of patients with osteoarthritis. We undertake a simulation study, grounded in the case study, to compare the relative bias, efficiency and confidence interval coverage of the methods. We consider data generating processes with non-linear functional form relationships, normal and non-normal endpoints. We find that across the circumstances considered, bias-corrected matching generally reported less bias, but higher variance than targeted maximum likelihood estimation. When either targeted maximum likelihood estimation or bias-corrected matching incorporated machine learning, bias was much reduced, compared to using misspecified parametric models. PMID:24525488

  8. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
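
    The log-determinant computation mentioned above is the numerically delicate part of any Gaussian spatial likelihood. A minimal sketch follows, assuming a placeholder exponential correlation model plus a white-noise term (not the MLEMTC correlation function): the log-determinant is obtained from an LU factorization and plugged into the Gaussian log-likelihood.

      import numpy as np
      from scipy.linalg import lu_factor, cho_factor, cho_solve

      # Hypothetical on-chip measurement sites and an exponential spatial correlation
      # model with a white-noise term; these are stand-ins, not the paper's model.
      rng = np.random.default_rng(6)
      xy = rng.uniform(0, 1, size=(40, 2))
      dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

      def covariance(params):
          sigma2, length, nugget = params
          return sigma2 * np.exp(-dist / length) + nugget * np.eye(len(xy))

      def gaussian_loglik(params, y):
          K = covariance(params)
          # Log-determinant via LU factorization (K is positive definite, so the
          # product of the U diagonal is positive and no sign bookkeeping is needed).
          lu, piv = lu_factor(K)
          logdet = np.sum(np.log(np.abs(np.diag(lu))))
          alpha = cho_solve(cho_factor(K), y)
          return -0.5 * (y @ alpha + logdet + len(y) * np.log(2 * np.pi))

      y = rng.multivariate_normal(np.zeros(len(xy)), covariance((1.0, 0.3, 0.1)))
      print("log-likelihood at true parameters:", round(gaussian_loglik((1.0, 0.3, 0.1), y), 2))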

  9. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  10. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  11. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents.

    PubMed

    Stepanyuk, Andrey; Borisyuk, Anya; Belan, Pavel

    2014-01-01

    Dendritic integration and neuronal firing patterns strongly depend on biophysical properties of synaptic ligand-gated channels. However, precise estimation of biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation of not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of unitary current as compared to the peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment. PMID:25324721

  12. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    PubMed Central

    Stepanyuk, Andrey; Borisyuk, Anya; Belan, Pavel

    2014-01-01

    Dendritic integration and neuronal firing patterns strongly depend on biophysical properties of synaptic ligand-gated channels. However, precise estimation of biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation of not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of unitary current as compared to the peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment. PMID:25324721

  13. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
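
    A compact sketch of the two estimators compared above, for a single noisy complex exponential: the Kasai estimator takes the phase of the lag-one autocorrelation, while the maximum likelihood estimate of a single-tone frequency in additive white Gaussian noise is the peak of the (zero-padded) periodogram. The A-line rate, Doppler frequency, and noise level are placeholders, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      fs = 50e3                     # A-line rate (hypothetical)
      n = 64
      f_true = 3.2e3                # Doppler frequency to recover
      t = np.arange(n) / fs
      noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
      z = np.exp(2j * np.pi * f_true * t) + 0.5 * noise

      # Kasai autocorrelation estimator: phase of the lag-one autocorrelation.
      f_kasai = fs / (2 * np.pi) * np.angle(np.sum(z[1:] * np.conj(z[:-1])))

      # ML estimator for a single tone in additive white Gaussian noise:
      # the frequency that maximizes the periodogram (zero-padded for resolution).
      freqs = np.fft.fftfreq(16 * n, d=1 / fs)
      periodogram = np.abs(np.fft.fft(z, 16 * n)) ** 2
      f_mle = freqs[int(np.argmax(periodogram))]

      print(f"true {f_true:.0f} Hz, Kasai {f_kasai:.0f} Hz, MLE {f_mle:.0f} Hz")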

  14. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  15. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights.

    PubMed

    Deledalle, Charles-Alban; Denis, Loïc; Tupin, Florence

    2009-12-01

    Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades, which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on each two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.

  16. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249

  17. A statistical technique for processing radio interferometer data. [using maximum likelihood algorithm

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. D.

    1975-01-01

    The output of a radio interferometer is the Fourier transform of the object under investigation. Due to the limited coverage of the Fourier plane, the reconstruction of the image of the source is blurred by the beam of the synthesized array. A maximum-likelihood processing technique is described which uses the statistical properties of the received noise-like signals. This technique has been used extensively in the processing of large-aperture seismic arrays. This inversion method results in a synthesized beam that is more uniform, has lower sidelobes, and higher resolution than the normal Fourier transform methods. The maximum-likelihood method algorithm was applied successfully to very long baseline and short baseline interferometric data.

  18. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
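
    The report's Newton-Raphson strategy is generic: repeatedly solve the linearized score equations until the step is negligible. The sketch below applies the same iteration to an ordinary logistic regression rather than to the NDMMF itself, whose likelihood is not reproduced here; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(8)
      n, p = 400, 3
      X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
      beta_true = np.array([-0.5, 1.2, -0.8])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

      # Newton-Raphson on the logistic log-likelihood: repeatedly solve the
      # linearized score equations (X^T W X) delta = X^T (y - mu).
      beta = np.zeros(p)
      for _ in range(25):
          mu = 1 / (1 + np.exp(-X @ beta))
          W = mu * (1 - mu)
          score = X.T @ (y - mu)
          hessian = X.T @ (X * W[:, None])
          step = np.linalg.solve(hessian, score)
          beta += step
          if np.max(np.abs(step)) < 1e-10:
              break
      print("ML estimates:", np.round(beta, 3), " true:", beta_true)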

  19. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  20. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  1. Introducing robustness to maximum-likelihood refinement of electron-microscopy data

    SciTech Connect

    Scheres, Sjors H. W. Carazo, José-María

    2009-07-01

    An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy images is presented that is based on fitting finite mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides intrinsic robustness against outliers and other atypical observations in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.

  2. The use of prior probabilities in maximum likelihood classification of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.

    1980-01-01

    Possibilities for the improvement of classification accuracies by the use of prior information about the expected distribution of classes in the maximum likelihood classification of remote sensing data are examined. The modification of the maximum likelihood decision rule to take into account one or several sets of prior probabilities for the occurrence of classes, probabilities based on independent knowledge of the area surveyed, is demonstrated. It is then shown that the use of prior probabilities is sufficiently versatile to allow the prior weighting of output classes based on their anticipated sizes, the merging of continuously varying measurements with discrete collateral information data sets, and the construction of time-sequential classification systems in which an earlier classification modifies the outcome of a later one.
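
    The modified decision rule amounts to adding the log prior to each class's maximum likelihood discriminant. A minimal sketch, assuming two hypothetical spectral classes with a shared covariance, shows how a strong prior can flip the assignment of a pixel that sits near the decision boundary.

      import numpy as np

      # Two hypothetical spectral classes with equal covariance (illustrative values).
      mean = {0: np.array([0.2, 0.4]), 1: np.array([0.5, 0.3])}
      cov_inv = np.linalg.inv(0.01 * np.eye(2))

      def discriminant(x, k, prior):
          # Gaussian ML discriminant with the log prior added; with equal priors
          # this reduces to the standard maximum likelihood decision rule.
          diff = x - mean[k]
          return -0.5 * diff @ cov_inv @ diff + np.log(prior)

      x = np.array([0.36, 0.35])                  # pixel near the decision boundary
      for priors in [(0.5, 0.5), (0.9, 0.1)]:
          label = max((0, 1), key=lambda k: discriminant(x, k, priors[k]))
          print(f"priors {priors} -> class {label}")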

  3. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks.

    PubMed

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-12-07

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks.
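
    A simplified numerical sketch of the offset estimation follows. Forward and reverse timestamp differences from the two-way exchange are modeled as inverse-Gaussian delays shifted by plus and minus the clock offset, and the offset is found by maximizing the joint likelihood. The delay parameters are treated as known here, which is a simplification of the paper's estimator, and all numerical values are placeholders.

      import numpy as np
      from scipy.stats import invgauss
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(10)
      offset_true = 4.0e-3                    # clock offset between the two nanomachines (s)
      mean_delay, shape = 2.0e-2, 5.0e-2      # inverse-Gaussian propagation-delay parameters
      ig = invgauss(mu=mean_delay / shape, scale=shape)   # mean = mean_delay

      n_rounds = 50
      up = ig.rvs(size=n_rounds, random_state=rng) + offset_true    # forward timestamp differences
      down = ig.rvs(size=n_rounds, random_state=rng) - offset_true  # reverse timestamp differences

      def neg_loglik(theta):
          # Joint likelihood of both directions of the two-way exchange; the delay
          # distribution parameters are treated as known, which is a simplification.
          d_up, d_down = up - theta, down + theta
          if np.any(d_up <= 0) or np.any(d_down <= 0):
              return 1e9                      # outside the support of the delay distribution
          return -(ig.logpdf(d_up).sum() + ig.logpdf(d_down).sum())

      res = minimize_scalar(neg_loglik, bounds=(-0.05, 0.05), method="bounded")
      print(f"true offset {offset_true * 1e3:.2f} ms, MLE {res.x * 1e3:.2f} ms")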

  4. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  5. A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations

    PubMed Central

    Anderson, Amy D.; Weir, Bruce S.

    2007-01-01

    A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212

  6. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml. PMID:15980534

  7. PHYML Online--a web server for fast maximum likelihood-based phylogenetic inference.

    PubMed

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-07-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at http://atgc.lirmm.fr/phyml.

  8. Maximum-likelihood Estimation of Planetary Lithospheric Rigidity from Gravity and Topography

    NASA Astrophysics Data System (ADS)

    Lewis, K. W.; Eggers, G. L.; Simons, F. J.; Olhede, S. C.

    2014-12-01

    Gravity and surface topography remain among the best available tools with which to study the lithospheric structure of planetary bodies. Numerous techniques have been developed to quantify the relationship between these fields in both the spatial and spectral domains, to constrain geophysical parameters of interest. Simons and Olhede (2013) describe a new technique based on maximum-likelihood estimation of lithospheric parameters including flexural rigidity, subsurface-surface loading ratio, and the correlation of these loads. We report on the first applications of this technique to planetary bodies including Venus, Mars, and the Earth. We compare results using the maximum-likelihood technique to previous studies using admittance and coherence-based techniques. While various methods of evaluating the relationship of gravity and topography fields have distinct advantages, we demonstrate the specific benefits of the Simons and Olhede technique, which yields unbiased, minimum variance estimates of parameters, together with their covariance. Given the unavoidable problems of incompletely sensed gravity fields, spectral artifacts of data interpolation, downward continuation, and spatial localization, we prescribe a recipe for application of this method to real-world data sets. In the specific case of Venus, we discuss the results of global mapped inversion of an isotropic Matérn covariance model of its topography. We interpret and identify, via statistical testing, regions that require abandoning the null-hypothesis of isotropic Gaussianity, an assumption of the maximum-likelihood technique.

  9. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and a patch of AO outliers. On the other hand, the TMLE is reduced to a combinatorial optimisation problem and is harder to implement, but it is robust to both types of outliers considered here. To overcome the difficulty, we apply the parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  10. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  11. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  12. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  13. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.

    PubMed

    Meyer, Karin

    2016-08-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
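
    For concreteness, the textbook form of such an iterative procedure, the EM algorithm for a two-component univariate normal mixture, is sketched below; it is not claimed to reproduce the specific procedure or the convergence bounds of the report, and the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(11)
      # Synthetic sample from a two-component univariate normal mixture.
      x = np.concatenate([rng.normal(-1.0, 0.7, 300), rng.normal(2.0, 1.2, 700)])

      # Textbook EM iteration for the mixture parameters (weights, means, variances).
      w, mu, var = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])
      for _ in range(200):
          # E-step: posterior responsibility of each component for each observation.
          dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
          resp = dens / dens.sum(axis=1, keepdims=True)
          # M-step: weighted maximum-likelihood updates.
          nk = resp.sum(axis=0)
          w = nk / len(x)
          mu = (resp * x[:, None]).sum(axis=0) / nk
          var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
      print("weights", np.round(w, 2), "means", np.round(mu, 2), "variances", np.round(var, 2))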

  15. Bootstrap, Bayesian probability and maximum likelihood mapping: exploring new tools for comparative genome analyses

    PubMed Central

    Zhaxybayeva, Olga; Gogarten, J Peter

    2002-01-01

    Background: Horizontal gene transfer (HGT) played an important role in shaping microbial genomes. In addition to genes under sporadic selection, HGT also affects housekeeping genes and those involved in information processing, even ribosomal RNA encoding genes. Here we describe tools that provide an assessment and graphic illustration of the mosaic nature of microbial genomes. Results: We adapted the Maximum Likelihood (ML) mapping to the analyses of all detected quartets of orthologous genes found in four genomes. We have automated the assembly and analyses of these quartets of orthologs given the selection of four genomes. We compared the ML-mapping approach to more rigorous Bayesian probability and Bootstrap mapping techniques. The latter two approaches appear to be more conservative than the ML-mapping approach, but qualitatively all three approaches give equivalent results. All three tools were tested on mitochondrial genomes, which presumably were inherited as a single linkage group. Conclusions: In some instances of interphylum relationships we find nearly equal numbers of quartets strongly supporting the three possible topologies. In contrast, our analyses of genome quartets containing the cyanobacterium Synechocystis sp. indicate that a large part of the cyanobacterial genome is related to that of low GC Gram positives. Other groups that had been suggested as sister groups to the cyanobacteria contain many fewer genes that group with the Synechocystis orthologs. Interdomain comparisons of genome quartets containing the archaeon Halobacterium sp. revealed that Halobacterium sp. shares more genes with Bacteria that live in the same environment than with Bacteria that are more closely related based on rRNA phylogeny. Many of these genes encode proteins involved in substrate transport and metabolism and in information storage and processing. The performed analyses demonstrate that relationships among prokaryotes cannot be accurately depicted by or inferred from

  16. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
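
    The affine-invariance argument can be checked numerically: applying a non-singular affine transformation y = Mx + b to training and test spectra changes each class covariance to M Sigma M^T, adds the same log|det M| term to every discriminant, and therefore leaves the maximum likelihood class assignments unchanged. The sketch below uses synthetic two-band "spectra"; M and b are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(12)

      def ml_classify(train, labels, test):
          # Quadratic Gaussian ML classifier: per-class mean and covariance.
          scores = []
          for k in np.unique(labels):
              xk = train[labels == k]
              mu, cov = xk.mean(axis=0), np.cov(xk.T)
              inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
              diff = test - mu
              scores.append(-0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff) + logdet))
          return np.argmax(np.column_stack(scores), axis=1)

      train = np.vstack([rng.normal([0, 0], 1.0, (100, 2)), rng.normal([2, 1], 1.5, (100, 2))])
      labels = np.repeat([0, 1], 100)
      test = rng.normal([1, 0.5], 2.0, (50, 2))

      # Apply a non-singular affine transformation (a radiance-to-reflectance-style
      # rescaling plus offset) to every spectrum; the ML class assignments are unchanged.
      M, b = np.array([[2.0, 0.3], [-0.1, 1.5]]), np.array([0.7, -0.2])
      same = np.array_equal(ml_classify(train, labels, test),
                            ml_classify(train @ M.T + b, labels, test @ M.T + b))
      print("identical classifications after affine transform:", same)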

  17. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  18. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities. PMID:18249716

  19. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarms (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression that is easy to implement and optimal in the maximum-likelihood (ML) sense, and it produces very good results on the CCD collects that we have tested.

  20. F-8C adaptive flight control extensions [for maximum likelihood estimation]

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  1. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
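
    The abstract does not spell out the reconstruction details. One commonly used way to keep the estimate positive during maximum likelihood reconstruction is to parameterize the density matrix through a Cholesky-like factor, rho = T†T / Tr(T†T), and optimize the likelihood over the real entries of T. The single-qubit sketch below assumes Gaussian noise on measured Pauli expectation values; the operators, noise model and numbers are illustrative assumptions, not the protocol of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # single-qubit Pauli operators, used here only for illustration
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        paulis = [sx, sy, sz]

        def rho_from_params(t):
            """Positive semidefinite, unit-trace density matrix from 4 real parameters."""
            T = np.array([[t[0], 0.0],
                          [t[2] + 1j * t[3], t[1]]], dtype=complex)  # lower-triangular factor
            rho = T.conj().T @ T
            return rho / (np.trace(rho).real + 1e-12)

        def neg_log_likelihood(t, measured, sigma=0.05):
            """Gaussian likelihood of the measured Pauli expectation values."""
            rho = rho_from_params(t)
            pred = np.array([np.trace(rho @ P).real for P in paulis])
            return np.sum((measured - pred) ** 2) / (2 * sigma ** 2)

        measured = np.array([0.72, -0.05, 0.65])      # hypothetical noisy <sx>, <sy>, <sz>
        res = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0, 0.0],
                       args=(measured,), method='Nelder-Mead')
        rho_ml = rho_from_params(res.x)
        print(np.round(rho_ml, 3))
        print(np.linalg.eigvalsh(rho_ml))             # eigenvalues are >= 0 by construction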

  2. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  3. Kernel maximum likelihood scaled locally linear embedding for night vision images

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    This paper proposes a robust method to analyze night vision data. A new kernel manifold algorithm is designed to match an ideal distribution with a complex one in natural data. First, an outlier probability based on a similarity metric is derived by solving the maximum likelihood problem in kernel space; this probability reflects class membership because it takes the statistical information on the manifold into account. Then a robust nonlinear mapping is obtained by scaling the embedding process of kernel LLE with the outlier probability. In simulations on artificial manifolds and on real low-light-level (LLL) and infrared image sets, the proposed method shows strong performance in dimension reduction and classification.

  4. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  5. Adaptive Partial Response Maximum Likelihood Detection with Tilt Estimation Using Sync Pattern

    NASA Astrophysics Data System (ADS)

    Lee, Kyusuk; Lee, Joohyun; Lee, Jaejin

    2006-02-01

    We propose an improved detection method that concurrently adjusts the equalizer coefficients and the reference branch values in the Viterbi detector. For the estimation of asymmetric channel characteristics, we exploit sync patterns in each data frame. Because a read-only memory (ROM) table is used to update the equalizer coefficients and the branch reference values, the hardware complexity is reduced. The proposed partial response maximum likelihood (PRML) detector has been designed and verified in VerilogHDL and synthesized by Synopsys Design Compiler with a Hynix 0.35 μm standard cell library.

  6. Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network

    NASA Technical Reports Server (NTRS)

    Pollara-Bozzola, Fabrizio (Inventor)

    1989-01-01

    A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.
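
    The patent is about mapping the decoder onto a hypercube, but the accumulated-metric/traceback idea itself is easy to illustrate on a single processor. The sketch below decodes a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal, chosen here only as an example): metrics are accumulated forward through the trellis, survivor decisions are stored, and the decoded bits are recovered by tracing back from the best final state.

        def encode(bits):
            """Rate-1/2, constraint-length-3 convolutional encoder (generators 0o7, 0o5)."""
            s = [0, 0]
            out = []
            for b in bits:
                out += [b ^ s[0] ^ s[1], b ^ s[1]]
                s = [b, s[0]]
            return out

        def viterbi_decode(rx):
            """Hard-decision Viterbi decoder with survivor traceback for the code above."""
            n_states = 4
            INF = float('inf')
            metric = [0.0] + [INF] * (n_states - 1)    # encoder starts in the all-zero state
            history = []                               # (prev_state, input_bit) per state per step
            for t in range(0, len(rx), 2):
                new_metric = [INF] * n_states
                step = [None] * n_states
                for prev in range(n_states):
                    if metric[prev] == INF:
                        continue
                    s0, s1 = prev >> 1, prev & 1       # s0 is the most recent input bit
                    for b in (0, 1):
                        expected = (b ^ s0 ^ s1, b ^ s1)
                        branch = (expected[0] != rx[t]) + (expected[1] != rx[t + 1])
                        nxt = (b << 1) | s0
                        m = metric[prev] + branch
                        if m < new_metric[nxt]:
                            new_metric[nxt] = m
                            step[nxt] = (prev, b)
                metric = new_metric
                history.append(step)
            state = min(range(n_states), key=lambda s: metric[s])
            bits = []
            for step in reversed(history):             # traceback through stored survivors
                prev, b = step[state]
                bits.append(b)
                state = prev
            return bits[::-1]

        msg = [1, 0, 1, 1, 0, 0, 1]
        rx = encode(msg)
        rx[3] ^= 1                                     # flip one bit to simulate a channel error
        print(viterbi_decode(rx) == msg)               # expected: True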

  7. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is described. It is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  8. Maximum likelihood method for fitting the Fundamental Plane of the 6dF Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Magoulas, C.; Colless, M.; Jones, D.; Springob, C.; Mould, J.

    2010-04-01

    We have used over 10,000 early-type galaxies from the 6dF Galaxy Survey (6dFGS) to construct the Fundamental Plane across the optical and near-infrared passbands. We demonstrate that a maximum likelihood fit to a multivariate Gaussian model for the distribution of galaxies in size, surface brightness and velocity dispersion can properly account for selection effects, censoring and observational errors, leading to precise and unbiased parameters for the Fundamental Plane and its intrinsic scatter. This method allows an accurate and robust determination of the dependencies of the Fundamental Plane on variations in the stellar populations and environment of early-type galaxies.

  9. Maximum likelihood analysis of rare binary traits under different modes of inheritance.

    PubMed

    Thaller, G; Dempfle, L; Hoeschele, I

    1996-08-01

    Maximum likelihood methodology was applied to determine the mode of inheritance of rare binary traits with data structures typical for swine populations. The genetic models considered included a monogenic, a digenic, a polygenic, and three mixed polygenic and major gene models. The main emphasis was on the detection of major genes acting on a polygenic background. Deterministic algorithms were employed to integrate and maximize likelihoods. A simulation study was conducted to evaluate model selection and parameter estimation. Three designs were simulated that differed in the number of sires/number of dams within sires (10/10, 30/30, 100/30). Major gene effects of at least one SD of the liability were detected with satisfactory power under the mixed model of inheritance, except for the smallest design. Parameter estimates were empirically unbiased with acceptable standard errors, except for the smallest design, and allowed a clear distinction between the genetic models. Distributions of the likelihood ratio statistic were evaluated empirically, because asymptotic theory did not hold. For each simulation model, the Average Information Criterion was computed for all models of analysis. The model with the smallest value was chosen as the best model and was equal to the true model in almost every case studied.

  10. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
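
    The photon-by-photon likelihood analyzed in the paper is built from the two-state rate matrix and the recorded colors and arrival times. As a simplified stand-in, the sketch below evaluates a discrete-time hidden-Markov approximation of that likelihood on a binned trajectory of photon colors and maximizes it numerically; the bin width, rates and FRET efficiencies are assumptions for illustration, not the continuous-time formulation of the paper.

        import numpy as np
        from scipy.optimize import minimize

        DT = 1e-4                                     # assumed bin width in seconds

        def neg_log_likelihood(params, colors):
            """Forward-algorithm likelihood of a binned two-state color sequence.
            params = (log k12, log k21, logit E1, logit E2)."""
            k12, k21 = np.exp(params[0]), np.exp(params[1])
            if k12 * DT >= 1 or k21 * DT >= 1:
                return np.inf
            E = 1.0 / (1.0 + np.exp(-np.asarray(params[2:4])))
            T = np.array([[1 - k12 * DT, k12 * DT],
                          [k21 * DT, 1 - k21 * DT]])
            p = np.array([k21, k12]) / (k12 + k21)    # equilibrium state occupation
            logL = 0.0
            for c in colors:
                emit = E if c == 1 else 1.0 - E       # P(acceptor/donor photon | state)
                p = (p @ T) * emit
                s = p.sum()
                logL += np.log(s)
                p /= s                                # rescale to avoid underflow
            return -logL

        # simulate a toy two-state trajectory (switching rate 300/s, efficiencies 0.9 and 0.2)
        rng = np.random.default_rng(1)
        state, colors = 0, []
        for _ in range(5000):
            if rng.random() < 300.0 * DT:
                state = 1 - state
            colors.append(int(rng.random() < (0.9, 0.2)[state]))

        res = minimize(neg_log_likelihood, x0=[np.log(100.0), np.log(100.0), 1.0, -1.0],
                       args=(colors,), method='Nelder-Mead')
        print(np.exp(res.x[:2]))                      # estimated rates (1/s)
        print(1.0 / (1.0 + np.exp(-res.x[2:])))       # estimated efficiencies (labels may swap)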

  11. New method to compute Rcomplete enables maximum likelihood refinement for small datasets

    PubMed Central

    Luebben, Jens; Gruene, Tim

    2015-01-01

    The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, it did not spread through the crystallographic community, which instead favored the cross-validation method known as Rfree. The importance of Rfree has grown beyond a pure validation tool. However, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and we compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widespread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum likelihood parameters even for small datasets. Rcomplete enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination including high-pressure studies, neutron diffraction studies, and datasets from free electron lasers. PMID:26150515

  12. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    SciTech Connect

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
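
    The essential difference from chi-square fitting is the objective: for counts n_i with model prediction mu_i, the negative log Poisson likelihood is sum_i [mu_i - n_i log mu_i] up to a constant. A minimal sketch using a Gaussian line on a flat background (the fit function is an arbitrary example, not the Thomson scattering model):

        import numpy as np
        from scipy.optimize import minimize

        def model(x, amp, center, width, bkg):
            """Example fit function: Gaussian line plus flat background, in counts per bin."""
            return bkg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

        def neg_log_likelihood(params, x, counts):
            """Poisson negative log-likelihood; the constant log(n!) term is dropped."""
            mu = model(x, *params)
            if np.any(mu <= 0):
                return np.inf
            return np.sum(mu - counts * np.log(mu))

        rng = np.random.default_rng(0)
        x = np.linspace(-5, 5, 60)
        true = (8.0, 0.5, 1.0, 1.5)                   # low signal: ~10 counts/bin at the peak
        counts = rng.poisson(model(x, *true))

        res = minimize(neg_log_likelihood, x0=(5.0, 0.0, 2.0, 1.0), args=(x, counts),
                       method='Nelder-Mead')
        print(np.round(res.x, 2))                     # ML estimates, to compare with `true`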

  13. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for nonstationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than are incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  14. Likelihood approaches for the invariant density ratio model with biased-sampling data

    PubMed Central

    Shen, Yu; Ning, Jing; Qin, Jing

    2012-01-01

    The full likelihood approach in statistical analysis is regarded as the most efficient means for estimation and inference. For complex length-biased failure time data, computational algorithms and theoretical properties are not readily available, especially when a likelihood function involves infinite-dimensional parameters. Relying on the invariance property of length-biased failure time data under the semiparametric density ratio model, we present two likelihood approaches for the estimation and assessment of the difference between two survival distributions. The most efficient maximum likelihood estimators are obtained by the EM algorithm and profile likelihood. We also provide a simple numerical method for estimation and inference based on conditional likelihood, which can be generalized to k-arm settings. Unlike conventional survival data, the mean of the population failure times can be consistently estimated given right-censored length-biased data under mild regularity conditions. To check the semiparametric density ratio model assumption, we use a test statistic based on the area between two survival distributions. Simulation studies confirm that the full likelihood estimators are more efficient than the conditional likelihood estimators. We analyse an epidemiological study to illustrate the proposed methods. PMID:23843663

  15. A dual formulation of a penalized maximum likelihood x-ray CT reconstruction problem

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Taguchi, Katsuyuki; Gullberg, Grant T.; Tsui, Benjamin M. W.

    2009-02-01

    This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is then derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem and its performance is compared with the filtered back-projection algorithm, and with the primal formulation optimized by using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than the primal formulation using surrogate functions. However, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since the surrogate functions in general can provide a diagonal approximation of the Hessian matrix of the objective function, further convergence speed-up may be achieved by deriving the surrogate function of the dual objective function.

  16. Automated maximum likelihood separation of signal from baseline in noisy quantal data.

    PubMed

    Bruno, William J; Ullah, Ghanim; Mak, Don-On Daniel; Pearson, John E

    2013-07-01

    Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system's behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225
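
    Under the simplification stated in the abstract (successive samples treated as independent) and ignoring baseline drift for brevity, the model reduces to a Gaussian mixture whose component means are constrained to b + n*q for n = 0..N open channels, and the expectation-maximization updates have closed form. The sketch below is a minimal version under those assumptions, not the full drifting-baseline algorithm of the paper.

        import numpy as np

        def quantal_em(y, n_max=3, n_iter=200):
            """EM for y[t] ~ Normal(b + n*q, sigma^2) with hidden n = 0..n_max; drift ignored."""
            levels = np.arange(n_max + 1)
            b = np.percentile(y, 10)
            q = (np.percentile(y, 90) - np.percentile(y, 10)) / max(n_max, 1)
            sigma = y.std()
            w = np.ones(n_max + 1) / (n_max + 1)      # occupancy of each level
            for _ in range(n_iter):
                # E-step: responsibility of each level for each sample
                mu = b + q * levels
                logp = -0.5 * ((y[:, None] - mu[None, :]) / sigma) ** 2 + np.log(w + 1e-12)
                logp -= logp.max(axis=1, keepdims=True)
                r = np.exp(logp)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: closed-form update of (b, q), then sigma and the level weights
                w = r.mean(axis=0)
                En, En2 = r @ levels, r @ levels ** 2
                M = np.array([[len(y), En.sum()], [En.sum(), En2.sum()]])
                v = np.array([y.sum(), (En * y).sum()])
                b, q = np.linalg.solve(M, v)
                resid = y[:, None] - (b + q * levels)
                sigma = np.sqrt(np.sum(r * resid ** 2) / len(y))
            return b, q, sigma, w

        # hypothetical trace: baseline 0.2, quantal step 1.0, noise 0.3, up to 2 open channels
        rng = np.random.default_rng(0)
        states = rng.integers(0, 3, size=2000)
        y = 0.2 + 1.0 * states + rng.normal(0, 0.3, size=2000)
        b, q, sigma, w = quantal_em(y, n_max=2)
        print(round(b, 3), round(q, 3), round(sigma, 3))  # expected near 0.2, 1.0, 0.3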

  17. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
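
    The abstract does not reproduce the solver itself; a standard route to an approximate maximum-likelihood time-difference-of-arrival (TDOA) solution is nonlinear least squares on the TDOA residuals, which coincides with ML when the arrival-time errors are Gaussian. A minimal 3D sketch with made-up hydrophone positions and timing noise:

        import numpy as np
        from scipy.optimize import least_squares

        C = 1480.0                                    # assumed speed of sound in water, m/s
        hydrophones = np.array([[0.0, 0.0, 0.0],
                                [10.0, 0.0, 0.0],
                                [0.0, 10.0, 0.0],
                                [0.0, 0.0, 10.0],
                                [10.0, 10.0, 5.0]])

        def tdoa_residuals(x, tdoa, ref=0):
            """Measured minus predicted TDOAs relative to hydrophone `ref`."""
            d = np.linalg.norm(hydrophones - x, axis=1)
            pred = (d - d[ref]) / C
            return np.delete(pred, ref) - tdoa

        # simulate a transmitter at a known position and add Gaussian timing noise
        rng = np.random.default_rng(0)
        true_pos = np.array([4.0, 6.0, 2.0])
        d = np.linalg.norm(hydrophones - true_pos, axis=1)
        tdoa = np.delete((d - d[0]) / C, 0) + rng.normal(0, 1e-5, size=4)

        fit = least_squares(tdoa_residuals, x0=np.array([5.0, 5.0, 5.0]), args=(tdoa,))
        print(np.round(fit.x, 2))                     # approximate ML position estimate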

  18. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, T.A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
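
    The adjustment itself is not reproduced here, but the underlying censored-data likelihood is simple to state: detected concentrations contribute the (log-space) normal density and values below the reporting limit contribute the probability of falling below that limit. A minimal maximum likelihood sketch under a lognormal assumption, without the bias adjustment described in the paper:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def censored_neg_loglik(params, logc, detected, log_dl):
            """Type 1 censored lognormal likelihood: density for detections, CDF below the limit."""
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll = norm.logpdf(logc[detected], mu, sigma).sum()
            ll += norm.logcdf(log_dl[~detected], mu, sigma).sum()
            return -ll

        # hypothetical concentrations with a single detection limit of 0.1
        rng = np.random.default_rng(0)
        conc = rng.lognormal(mean=-2.0, sigma=1.0, size=200)
        dl = 0.1
        detected = conc >= dl
        logc = np.log(np.where(detected, conc, dl))   # censored values carry only the limit
        log_dl = np.full_like(logc, np.log(dl))

        res = minimize(censored_neg_loglik, x0=[0.0, 0.0],
                       args=(logc, detected, log_dl), method='Nelder-Mead')
        print(res.x[0], np.exp(res.x[1]))             # expected near -2.0 and 1.0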

  19. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    PubMed Central

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517

  20. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis

    PubMed Central

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-01-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement. PMID:27084950

  1. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses, and the statistical properties of the estimates are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
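
    As a concrete illustration of the estimation step alone (without the detector response folding that the paper is actually about), one can maximize the likelihood of event energies drawn from a broken power law dN/dE proportional to E^(-gamma1) below the break and E^(-gamma2) above it, normalized over a finite energy range. The energy range, indices and break below are arbitrary choices:

        import numpy as np
        from scipy.optimize import minimize

        E_MIN, E_MAX = 1.0, 1e4                       # assumed energy range

        def log_pdf(E, g1, g2, Eb):
            """Normalized broken power law on [E_MIN, E_MAX]: E^-g1 below Eb, E^-g2 above."""
            seg1 = (Eb**(1 - g1) - E_MIN**(1 - g1)) / (1 - g1)
            seg2 = Eb**(g2 - g1) * (E_MAX**(1 - g2) - Eb**(1 - g2)) / (1 - g2)
            logp = np.where(E < Eb, -g1 * np.log(E),
                            (g2 - g1) * np.log(Eb) - g2 * np.log(E))
            return logp - np.log(seg1 + seg2)

        def neg_loglik(params, E):
            g1, g2, logEb = params
            Eb = np.exp(logEb)
            if not (E_MIN < Eb < E_MAX) or g1 <= 1 or g2 <= 1:
                return np.inf
            return -np.sum(log_pdf(E, g1, g2, Eb))

        # draw energies from a broken power law by inverse-CDF sampling within each segment
        rng = np.random.default_rng(0)
        g1, g2, Eb = 2.7, 3.1, 300.0
        seg1 = (Eb**(1 - g1) - E_MIN**(1 - g1)) / (1 - g1)
        seg2 = Eb**(g2 - g1) * (E_MAX**(1 - g2) - Eb**(1 - g2)) / (1 - g2)
        u = rng.uniform(size=20000)
        below = rng.uniform(size=u.size) < seg1 / (seg1 + seg2)
        E = np.where(below,
                     (E_MIN**(1 - g1) + u * (Eb**(1 - g1) - E_MIN**(1 - g1)))**(1 / (1 - g1)),
                     (Eb**(1 - g2) + u * (E_MAX**(1 - g2) - Eb**(1 - g2)))**(1 / (1 - g2)))

        res = minimize(neg_loglik, x0=[2.0, 4.0, np.log(100.0)], args=(E,), method='Nelder-Mead')
        print(res.x[0], res.x[1], np.exp(res.x[2]))   # expected near 2.7, 3.1, 300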

  2. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    NASA Astrophysics Data System (ADS)

    LIU, Liang; WEI, Ping; LIAO, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional approaches. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, for large arrays we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.

  3. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
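
    Under the conditional-independence assumption, the combined score for a pair of binary classifier outputs is the product of the individual likelihood ratios, and sweeping a threshold on that product traces out the combined ROC. A small sketch with made-up sensitivities and specificities (the numbers are illustrative only):

        import itertools

        # hypothetical operating points: (sensitivity, specificity) of two independent classifiers
        classifiers = [(0.85, 0.90), (0.75, 0.95)]

        def joint_probs(label):
            """P(pattern of positive/negative calls | true label) for every call pattern."""
            probs = {}
            for calls in itertools.product([1, 0], repeat=len(classifiers)):
                p = 1.0
                for call, (sens, spec) in zip(calls, classifiers):
                    if label == 1:
                        p *= sens if call == 1 else 1 - sens
                    else:
                        p *= (1 - spec) if call == 1 else spec
                probs[calls] = p
            return probs

        p_pos, p_neg = joint_probs(1), joint_probs(0)
        # rank call patterns by likelihood ratio; cumulative sums give the combined ROC points
        ranked = sorted(p_pos, key=lambda c: p_pos[c] / p_neg[c], reverse=True)
        tpr = fpr = 0.0
        for calls in ranked:
            tpr += p_pos[calls]
            fpr += p_neg[calls]
            print(calls, round(fpr, 4), round(tpr, 4))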

  4. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = sum_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
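
    For normal components whose parameters are already known (for example from training fields), the ML estimate of the mixing proportion is conveniently computed with the EM algorithm. A two-component sketch estimating only the proportion, with made-up component densities:

        import numpy as np
        from scipy.stats import norm

        def em_proportion(x, f1, f2, n_iter=100):
            """EM for p in f(x) = p*f1(x) + (1-p)*f2(x), with known component densities."""
            p = 0.5
            d1, d2 = f1(x), f2(x)
            for _ in range(n_iter):
                r = p * d1 / (p * d1 + (1 - p) * d2)  # responsibility of component 1
                p = r.mean()
            return p

        # hypothetical component densities, e.g. estimated from training samples
        f1 = lambda x: norm.pdf(x, loc=0.0, scale=1.0)
        f2 = lambda x: norm.pdf(x, loc=3.0, scale=1.5)

        rng = np.random.default_rng(0)
        n, true_p = 5000, 0.3
        from_1 = rng.random(n) < true_p
        x = np.where(from_1, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.5, n))
        print(round(em_proportion(x, f1, f2), 3))     # expected close to 0.3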

  5. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517

  6. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  7. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  8. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
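
    The projection step described in the abstract (finding the nearest physical state under the 2-norm) only touches the eigenvalue spectrum. The sketch below follows that description by zeroing negative eigenvalues from the smallest upward and redistributing their total weight over the remaining ones; it is a reconstruction from the abstract, not a verified reimplementation of the published algorithm.

        import numpy as np

        def nearest_physical_state(mu):
            """Closest density matrix (2-norm) to a Hermitian, unit-trace candidate matrix."""
            vals, vecs = np.linalg.eigh(mu)
            lam = vals[::-1].copy()                   # eigenvalues, largest first
            acc = 0.0
            for i in range(len(lam) - 1, -1, -1):
                if lam[i] + acc / (i + 1) < 0:
                    acc += lam[i]                     # accumulate the negative weight
                    lam[i] = 0.0
                else:
                    lam[:i + 1] += acc / (i + 1)      # spread it over the remaining eigenvalues
                    break
            lam = lam[::-1]                           # back to ascending order, matching vecs
            return (vecs * lam) @ vecs.conj().T

        # candidate matrix with a negative eigenvalue (hypothetical measurement result)
        mu = np.array([[0.7, 0.5], [0.5, 0.3]])
        rho = nearest_physical_state(mu)
        print(np.linalg.eigvalsh(rho), np.trace(rho))  # eigenvalues >= 0, trace still 1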

  9. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  10. Incorporation of spatial information in Bayesian image reconstruction - The maximum residual likelihood criterion

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1992-01-01

    We have developed a new figure of merit, a 'maximum-residual-likelihood' (MRL) statistic, for the goodness of fit for Bayesian image restoration which explicitly incorporates spatial information. The MRL constraint provides a natural means of incorporating the prior knowledge that the residuals contain no spatial structure through the autocorrelation function of the residuals. We demonstrate that this statistic follows a Chi-square distribution and that forcing this statistic to have its most probable value leads to a restored image whose residuals are consistent with the noise model. Our numerical experiments suggest that image restoration using the MRL statistic alone is numerically robust and produces results which are independent of the initial guess for the restored image. However, we caution that using the MRL statistic without an image prior can result in overresolution in low SNR portions of the image.

  11. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE PAGESBeta

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; et al

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  12. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  13. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    SciTech Connect

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  14. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    PubMed

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement.

  15. Real-time Data Acquisition and Maximum-Likelihood Estimation for Gamma Cameras

    PubMed Central

    Furenlid, L.R.; Hesterman, J.Y.; Barrett, H.H.

    2015-01-01

    We have developed modular gamma-ray cameras for biomedical imaging that acquire data with a raw list-mode acquisition architecture. All observations associated with a gamma-ray event, such as photomultiplier (PMT) signals and time, are assembled into an event packet and added to an ordered list of event entries that comprise the acquired data. In this work we present the design of the data-acquisition system, and discuss algorithms for a specialized computing engine to reside in the data path between the front and back ends of each camera and carry out maximum-likelihood position and energy estimations in real time while data are being acquired. PMID:27066595

  16. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  17. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  18. A maximum likelihood analysis of the CoGeNT public dataset

    NASA Astrophysics Data System (ADS)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in northern Minnesota, is a p-type point-contact germanium detector with a 475 gram target mass (330 gram fiducial mass) that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  19. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  20. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    PubMed

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program (available for LINUX and WINDOWS environments), a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html. PMID:17973343

  1. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.

  2. MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem

    NASA Astrophysics Data System (ADS)

    Urban, S.; Leitloff, J.; Hinz, S.

    2016-06-01

    In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimation of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties and remains real-time capable at the same time. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.

  3. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.

    2004-05-25

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.

  4. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    NASA Astrophysics Data System (ADS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.
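
    The core of any MLEM variant is the multiplicative update x <- (x / (A^T 1)) * A^T(y / Ax), where A is the system matrix and y are the measured counts; the wobbled sampling and the LSF model enter only through A. A toy sketch on a small random system matrix (not a PET geometry):

        import numpy as np

        def mlem(A, y, n_iter=500):
            """Maximum likelihood expectation maximization for Poisson data y ~ Poisson(Ax)."""
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)                      # A^T 1, the sensitivity image
            for _ in range(n_iter):
                proj = np.maximum(A @ x, 1e-12)       # forward projection, guarded against zeros
                x *= (A.T @ (y / proj)) / sens
            return x

        # toy system: 40 detector bins, 10 image pixels, nonnegative random weights
        rng = np.random.default_rng(0)
        A = rng.random((40, 10))
        x_true = rng.uniform(1.0, 5.0, 10)
        y = rng.poisson(A @ x_true)

        x_hat = mlem(A, y)
        print(np.round(x_hat, 2))                     # should roughly track x_true
        print(np.round(x_true, 2))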

  5. MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND

    SciTech Connect

    Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  6. Mapping correlation lengths of lower crustal heterogeneities together with their maximum-likelihood uncertainties

    NASA Astrophysics Data System (ADS)

    Carpentier, S. F. A.; Roy-Chowdhury, K.; Hurich, C. A.

    2011-07-01

    A novel statistical technique has been applied to the AG-48 seismic line from the LITHOPROBE dataset, passing through the Abitibi-Grenville Province, Canada. The method maps lateral stochastic parameters, representing von Karman heterogeneity distributions in the crust, estimated from migrated deep reflection data. Together with these parameters, viz. lateral correlation length and power-law exponent, we also compute their associated maximum-likelihood uncertainties. Combining normalised measurements of lateral correlation and their uncertainties, objective profiles of crustal fabric can be made. These profiles indicate significant spatial variations in macro-scale petrofabric. Correlation length, which carries useful information about structural scale lengths of heterogeneous bodies, is an especially robust parameter with moderate associated uncertainties. Although the general outline of the profiles conforms to the earlier tectonic interpretations (based on line-drawings), new features are visible too. The macro-scale petrofabric is seen to vary significantly within the tectonic terrains identified earlier — in depth as well as laterally. A consequence of this additional delineation is that we can explain an existing part of the AG-48 line equally well in terms of collisional deformation and in terms of the effects of a major shear zone projected onto a parallel segment of the line.

  7. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
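
    A minimal sketch of the estimator discussed above, assuming an idealized photon-counting detector whose arrivals form a Poisson point process with intensity signal_rate*s(t - tau) + bg_rate, and assuming the shifted pulse stays inside the observation window so the integral term of the log-likelihood is constant over the candidate delays. The function names and the grid-search form are illustrative, not the authors' implementation.

      import numpy as np

      def ml_toa(photon_times, pulse_shape, delays, signal_rate, bg_rate):
          """Grid-search ML estimate of the pulse arrival time from photon
          timestamps; pulse_shape is the normalized pulse intensity profile."""
          loglik = [np.sum(np.log(signal_rate * pulse_shape(photon_times - d) + bg_rate))
                    for d in delays]
          return delays[int(np.argmax(loglik))]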

  8. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.

    2002-01-01

    The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum variance, and normally distributed spectra information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies of design parameters as a function of the science objectives, which is particularly important for space-based detectors, where physical parameters such as dimension and weight impose rigorous practical limits on the design envelope. This ML methodology is then generalized to estimate broken power law spectral parameters from real cosmic-ray data sets.
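
    The broken power law fit described above can be sketched as a direct minimization of the negative log-likelihood of a two-index spectrum that is continuous at the break energy. The parameterization and the use of a general-purpose optimizer below are illustrative assumptions; the study folds the detector response into the likelihood, which is omitted here.

      import numpy as np
      from scipy.optimize import minimize

      def broken_pl_nll(params, E, Emin, Emax):
          """Negative log-likelihood of a broken power law on [Emin, Emax]
          with index g1 below the break Ek and g2 above it."""
          g1, g2, logEk = params
          Ek = np.exp(logEk)
          if not (Emin < Ek < Emax):
              return np.inf
          # normalization constant, assuming g1 != 1 and g2 != 1
          I1 = (Ek**(1 - g1) - Emin**(1 - g1)) / (1 - g1)
          I2 = Ek**(g2 - g1) * (Emax**(1 - g2) - Ek**(1 - g2)) / (1 - g2)
          logC = -np.log(I1 + I2)
          logf = np.where(E <= Ek, -g1 * np.log(E),
                          (g2 - g1) * np.log(Ek) - g2 * np.log(E))
          return -(logC * E.size + logf.sum())

      # illustrative call (starting values are placeholders):
      # fit = minimize(broken_pl_nll, x0=[2.7, 3.1, np.log(3e6)],
      #                args=(energies, energies.min(), energies.max()),
      #                method='Nelder-Mead')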

  9. Regularization design in penalized maximum-likelihood image reconstruction for lesion detection in 3D PET

    NASA Astrophysics Data System (ADS)

    Yang, Li; Zhou, Jian; Ferrero, Andrea; Badawi, Ramsey D.; Qi, Jinyi

    2014-01-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In previous work, we studied penalized maximum-likelihood (PML) image reconstruction for the detection task and proposed a method to design a shift-invariant quadratic penalty function that maximizes detectability of a lesion at a known location in a two-dimensional image. Here we extend the regularization design to maximize detectability of lesions at unknown locations in fully 3D PET. We used a multiview channelized Hotelling observer (mvCHO) to assess lesion detectability in 3D images, mimicking the condition where a human observer examines three orthogonal views of a 3D image for lesion detection. We derived simplified theoretical expressions that allow fast prediction of the detectability of a 3D lesion. The theoretical results were used to design the regularization in PML reconstruction to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the optimized penalty with the conventional penalty for detecting lesions of various sizes. Only true coincidence events were simulated. Lesion detectability was also assessed by two human observers, whose performance agreed well with that of the mvCHO. Both the numerical observer and human observer results showed a statistically significant improvement in lesion detection when using the proposed penalty function compared with the conventional penalty function.

  10. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    NASA Astrophysics Data System (ADS)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models were subsequently developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.

  11. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

  12. Gutenberg-Richter b-value maximum likelihood estimation and sample size

    NASA Astrophysics Data System (ADS)

    Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.

    2016-06-01

    The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results quantify the probability of obtaining a correct estimate of b, for a given desired precision, as a function of sample size. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
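
    For reference, the Aki-Utsu estimator discussed above has a closed form; the sketch below includes Utsu's half-bin correction for rounded magnitudes and Aki's simple standard-error estimate. The variable names and the default bin width are illustrative.

      import numpy as np

      def aki_utsu_b(mags, m_min, dm=0.1):
          """Aki-Utsu maximum-likelihood b-value for magnitudes rounded to bins
          of width dm; m_min is the completeness (cutoff) magnitude."""
          m = np.asarray(mags, dtype=float)
          m = m[m >= m_min]                        # keep the complete part of the catalogue
          b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
          b_err = b / np.sqrt(m.size)              # Aki (1965) standard-error estimate
          return b, b_err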

  13. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  14. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under the noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the Patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, better converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the Patient group the shape factor of the measurement noise converges to values between 1 and 2 whereas for the Control group shape factor values are estimated in the super-Gaussian area.

  15. Genetic risk and longitudinal disease activity in systemic lupus erythematosus using targeted maximum likelihood estimation.

    PubMed

    Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F

    2016-09-01

    Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ) in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points, after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding of missing visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors. PMID:27467283

  16. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
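
    One common way to reuse a Levenberg-Marquardt style least-squares engine for the Poisson MLE, in the spirit of the record above, is to minimize the Poisson deviance by passing its signed square-root per-bin terms to the solver as residuals. The sketch below does this with scipy's least_squares and an illustrative single-exponential-plus-background decay model; it is a sketch of the general idea, not the authors' implementation.

      import numpy as np
      from scipy.optimize import least_squares

      def poisson_deviance_residuals(params, t, y, model):
          """Signed square-root Poisson deviance terms; their sum of squares is the
          Poisson deviance, so minimizing them maximizes the Poisson likelihood."""
          m = np.maximum(model(t, *params), 1e-12)       # expected counts per bin
          term = np.where(y > 0, y * np.log(y / m), 0.0)
          dev = np.maximum(2.0 * (m - y + term), 0.0)
          return np.sign(y - m) * np.sqrt(dev)

      def decay_model(t, amp, tau, bg):                  # illustrative lifetime-histogram model
          return amp * np.exp(-t / tau) + bg

      # illustrative call (starting values are placeholders):
      # fit = least_squares(poisson_deviance_residuals, x0=[100.0, 2.0, 1.0],
      #                     args=(t_bins, counts, decay_model), method='lm')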

  17. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    SciTech Connect

    Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; Huh, Ji-Haeng

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = -0.8.

  18. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved on Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second, which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
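
    A minimal 2D version of the contracting-grid idea described above: evaluate the likelihood on a coarse grid, recentre on the best node, shrink the grid, and repeat. The grid size, iteration count and shrink factor below are illustrative defaults, not the values used in the paper.

      import numpy as np

      def contracting_grid_search(loglike, center, span, n=5, iters=8, shrink=0.5):
          """Maximize loglike(x, y) with a contracting n-by-n grid search."""
          cx, cy = center
          sx, sy = span
          for _ in range(iters):
              xs = np.linspace(cx - sx / 2, cx + sx / 2, n)
              ys = np.linspace(cy - sy / 2, cy + sy / 2, n)
              values = np.array([[loglike(x, y) for x in xs] for y in ys])
              iy, ix = np.unravel_index(np.argmax(values), values.shape)
              cx, cy = xs[ix], ys[iy]
              sx, sy = sx * shrink, sy * shrink      # contract around the current best node
          return cx, cy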

  19. Improving water quality forecasting via data assimilation - Application of maximum likelihood ensemble filter to HSPF

    NASA Astrophysics Data System (ADS)

    Kim, Sunghee; Seo, Dong-Jun; Riazi, Hamideh; Shin, Changmin

    2014-11-01

    An ensemble data assimilation (DA) procedure is developed and evaluated for the Hydrologic Simulation Program - Fortran (HSPF), a widely used watershed water quality model. The procedure aims at improving the accuracy of short-range water quality prediction by updating the model initial conditions (IC) based on real-time observations of hydrologic and water quality variables. The observations assimilated include streamflow, biochemical oxygen demand (BOD), dissolved oxygen (DO), chlorophyll a (CHL-a), nitrate (NO3), phosphate (PO4) and water temperature (TW). The DA procedure uses the maximum likelihood ensemble filter (MLEF), which is capable of handling both nonlinear model dynamics and nonlinear observation equations, in a fixed-lag smoother formulation. For evaluation, the DA procedure was applied to the Kumho Catchment of the Nakdong River Basin in the Republic of Korea. A set of performance measures was used to evaluate analysis and prediction of streamflow and water quality variables. To remove systematic errors in the model simulation originating from structural and parametric errors, a parsimonious bias correction procedure is incorporated into the observation equation. The results show that the DA procedure substantially improves predictive skill for most variables; reduction in root mean square error ranges from 11% to 60% for Day-1 through 3 predictions for all observed variables except DO. It is seen that MLEF handles highly nonlinear hydrologic and biochemical observation equations very well, and that it is an effective DA technique for water quality forecasting.

  20. Least squares collocation with uncorrelated heterogeneous noise estimated by restricted maximum likelihood

    NASA Astrophysics Data System (ADS)

    Jarmołowski, Wojciech

    2015-06-01

    The article describes the estimation of a priori errors associated with heterogeneous, non-correlated noise within one dataset. The errors are estimated by restricted maximum likelihood (REML). The solution combines a cross-validation technique named leave-one-out (LOO) with REML estimation of a priori noise for the groups obtained by LOO. A numerical test is the main part of this case study and it presents two options. In the first, the whole dataset is split into two subsets using LOO, by finding potentially outlying data. Then a priori errors are estimated in groups for the better and worse subsets, where the latter includes the mentioned outlying data. The second option was to select data from the neighborhood of each point and estimate two a priori errors by REML, one for the selected point and one for the surrounding group of data. Both ideas have been validated with the use of LOO performed only at points of the better subset from the first kind of test. The use of homogeneous noise in the two example sets leads to LOO standard deviations equal to 1.83 and 1.54 mGal, respectively. The group estimation generates only a small improvement, at the level of 0.1 mGal, which can also be reached after the removal of worse points. The pointwise REML solution, however, provides LOO standard deviations that are at least 20% smaller than statistics from the homogeneous noise application.

  1. Maximum-likelihood q-estimator uncovers the role of potassium at neuromuscular junctions.

    PubMed

    da Silva, A J; Trindade, M A S; Santos, D O C; Lima, R F

    2016-02-01

    Recently, we demonstrated the existence of nonextensive behavior in neuromuscular transmission (da Silva et al. in Phys Rev E 84:041925, 2011). In this letter, we first obtain a maximum-likelihood q-estimator to calculate the scale factor ([Formula: see text]) and the q-index of q-Gaussian distributions. Next, we use the indexes to analyze spontaneous miniature end plate potentials in electrophysiological recordings from neuromuscular junctions. These calculations were performed assuming both normal and high extracellular potassium concentrations [Formula: see text]. This protocol was used to test the validity of Tsallis statistics under electrophysiological conditions closely resembling physiological stimuli. The analysis shows that q-indexes are distinct depending on the extracellular potassium concentration. Our letter provides a general way to obtain the best estimate of parameters from a q-Gaussian distribution function. It also expands the validity of Tsallis statistics in realistic physiological stimulus conditions. In addition, we discuss the physical and physiological implications of these findings.
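
    The q-Gaussian likelihood referred to above can be written down explicitly for the 1 < q < 3 branch, and a numerical maximization then yields the q-index and the scale factor. The sketch below is one such implementation under that assumption; it is not the closed-form estimator derived in the letter, and the variable names are illustrative.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      def qgauss_nll(params, x):
          """Negative log-likelihood of a q-Gaussian (1 < q < 3):
          p(x) = sqrt(beta)/C_q * [1 + (q-1)*beta*x^2]^(-1/(q-1))."""
          q, beta = params
          if not (1.0 < q < 3.0) or beta <= 0.0:
              return np.inf
          log_cq = (0.5 * np.log(np.pi) - 0.5 * np.log(q - 1.0)
                    + gammaln((3.0 - q) / (2.0 * (q - 1.0))) - gammaln(1.0 / (q - 1.0)))
          logp = 0.5 * np.log(beta) - log_cq - np.log1p((q - 1.0) * beta * x**2) / (q - 1.0)
          return -logp.sum()

      # illustrative call on an array of recorded amplitudes:
      # est = minimize(qgauss_nll, x0=[1.3, 1.0], args=(amplitudes,), method='Nelder-Mead')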

  2. Decision Feedback Partial Response Maximum Likelihood for Super-Resolution Media

    NASA Astrophysics Data System (ADS)

    Kasahara, Ryosuke; Ogata, Tetsuya; Kawasaki, Toshiyuki; Miura, Hiroshi; Yokoi, Kenya

    2007-06-01

    A decision feedback partial response maximum likelihood (PRML) scheme for super-resolution media was developed. Decision feedback is used to compensate for nonlinear distortion in the readout signals of super-resolution media, making it possible to compensate for long-bit nonlinear distortion in small circuits. A field-programmable gate array (FPGA) implementation of the decision feedback PRML was built, and a real-time bit error rate (bER) measuring system was developed. As a result, a bER of 4 × 10^-5 was achieved with an actual readout signal at double the density of a Blu-ray disc, converted to the optical properties of the experimental setup using a red-laser system. Also, a bER of 1.5 × 10^-5 was achieved at double the density of a high-definition digital versatile disc read-only memory (HD DVD-ROM), and the radial and tangential tilt margins were measured in a blue-laser system.

  3. Maximum Likelihood Ensemble Filter-based Data Assimilation with HSPF for Improving Water Quality Forecasting

    NASA Astrophysics Data System (ADS)

    Kim, S.; Riazi, H.; Shin, C.; Seo, D.

    2013-12-01

    Due to the large dimensionality of the state vector and the sparsity of observations, the initial conditions (IC) of water quality models are subject to large uncertainties. To reduce the IC uncertainties in operational water quality forecasting, an ensemble data assimilation (DA) procedure for the Hydrologic Simulation Program - Fortran (HSPF) model has been developed and evaluated for the Kumho River Subcatchment of the Nakdong River Basin in Korea. The procedure, referred to herein as MLEF-HSPF, uses the maximum likelihood ensemble filter (MLEF), which combines strengths of variational assimilation (VAR) and the ensemble Kalman filter (EnKF). The control variables involved in the DA procedure include the bias correction factors for mean areal precipitation and mean areal potential evaporation, the hydrologic state variables, and the water quality state variables such as water temperature, dissolved oxygen (DO), biochemical oxygen demand (BOD), ammonium (NH4), nitrate (NO3), phosphate (PO4) and chlorophyll a (CHL-a). Due to the very large dimensionality of the inverse problem, accurately specifying the parameters for the DA procedure is a challenge. Systematic sensitivity analysis is carried out to identify the optimal parameter settings. To evaluate the robustness of MLEF-HSPF, we use multiple subcatchments of the Nakdong River Basin. In the evaluation, we focus on the performance of MLEF-HSPF in the prediction of extreme water quality events.

  4. Maximum likelihood estimation of vehicle position for outdoor image sensor-based visible light positioning system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Lin, Jiming

    2016-04-01

    Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the viewpoint of statistical optimization for an outdoor image sensor-based visible light positioning system, we analyze and derive the maximum likelihood estimate and the corresponding Cramér-Rao lower bounds of the vehicle position, under the condition that the observation values of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and an in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance held constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.

  5. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography.

    PubMed

    Forthmann, P; Köhler, T; Begemann, P G C; Defrise, M

    2007-08-01

    Due to various system non-idealities, the raw data generated by a computed tomography (CT) machine are not readily usable for reconstruction. Although the deterministic nature of corruption effects such as crosstalk and afterglow permits correction by deconvolution, there is a drawback because deconvolution usually amplifies noise. Methods that perform raw data correction combined with noise suppression are commonly termed sinogram restoration methods. The need for sinogram restoration arises, for example, when photon counts are low and non-statistical reconstruction algorithms such as filtered backprojection are used. Many modern CT machines offer a dual focal spot (DFS) mode, which serves the goal of increased radial sampling by alternating the focal spot between two positions on the anode plate during the scan. Although the focal spot mode does not play a role with respect to how the data are affected by the above-mentioned corruption effects, it needs to be taken into account if regularized sinogram restoration is to be applied to the data. This work points out the subtle difference in processing that sinogram restoration for DFS requires, how it is correctly employed within the penalized maximum-likelihood sinogram restoration algorithm and what impact it has on image quality.

  6. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980
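
    For orientation, the Poisson-likelihood gradient used in this kind of FP reconstruction has a simple Wirtinger form for intensity measurements y ≈ |Ax|^2. The sketch below shows that gradient with a crude magnitude-based truncation standing in for the paper's truncation rule; A, x, the threshold and the update line are illustrative assumptions, not the authors' algorithm.

      import numpy as np

      def poisson_wirtinger_grad(x, A, y, trunc=10.0):
          """Wirtinger gradient of sum_i |a_i^H x|^2 - y_i*log|a_i^H x|^2 with a
          simple truncation of unreliable measurements (illustrative rule only)."""
          Ax = A @ x
          intensity = np.abs(Ax) ** 2 + 1e-12
          w = 1.0 - y / intensity                  # per-measurement Poisson weight
          keep = np.abs(w) < trunc                 # drop grossly inconsistent terms
          return A.conj().T @ (w * keep * Ax)

      # a plain gradient step would then read: x = x - step * poisson_wirtinger_grad(x, A, y)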

  7. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adds exactly the right amount of randomness to the test statistic, and thereby recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
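
    A minimal sketch of the bootstrap-adjusted Pearson test for a concrete (exponential) model, under the assumptions that a parametric bootstrap sample is drawn from the fitted model and that the statistic is referred to a chi-squared law with n_bins - 1 degrees of freedom; the model, the binning and the degrees-of-freedom choice are illustrative, not the paper's general construction.

      import numpy as np
      from scipy import stats

      def bootstrap_pearson_exponential(data, n_bins=10, seed=0):
          """Pearson statistic whose expected counts use an MLE refitted to a
          parametric bootstrap sample, while the observed counts come from the
          original data."""
          rng = np.random.default_rng(seed)
          data = np.asarray(data, dtype=float)
          n = data.size
          scale_hat = data.mean()                        # exponential MLE from the original data
          boot = rng.exponential(scale_hat, size=n)      # parametric bootstrap sample
          scale_boot = boot.mean()                       # bootstrap-sample MLE
          edges = np.quantile(data, np.linspace(0.0, 1.0, n_bins + 1))
          edges[0], edges[-1] = 0.0, np.inf
          observed, _ = np.histogram(data, bins=edges)   # bin counts from the original data
          expected = n * np.diff(stats.expon.cdf(edges, scale=scale_boot))
          x2 = np.sum((observed - expected) ** 2 / expected)
          return x2, stats.chi2.sf(x2, df=n_bins - 1)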

  8. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value both to existing data sets and to those to be produced by future astrophysics missions consisting of two or more detectors, by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors when the data sets are considered separately, as well as in terms of any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.

  9. Initial application of the maximum likelihood earthquake location method to early warning system in South Korea

    NASA Astrophysics Data System (ADS)

    Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.

    2015-12-01

    Early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and to provide early warning information to the general public. The EEWS in the KMA is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. The MAXEL has been demonstrated to be successful in determining the event location, even when an outlier is included in the small number of P arrivals. This presentation details the application of the MAXEL to the EEWS of the KMA, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of the statistics of earthquake locations based on ElarmS-2 and MAXEL.

  10. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    PubMed Central

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980

  11. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  12. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    NASA Astrophysics Data System (ADS)

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  13. Performance, accuracy, and Web server for evolutionary placement of short sequence reads under maximum likelihood.

    PubMed

    Berger, Simon A; Krompass, Denis; Stamatakis, Alexandros

    2011-05-01

    We present an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model. The accuracy of the algorithm is evaluated on several real-world data sets and compared with placement by pair-wise sequence comparison, using edit distances and BLAST. We introduce a slow and accurate as well as a fast and less accurate placement algorithm. For the slow algorithm, we develop additional heuristic techniques that yield almost the same run times as the fast version with only a small loss of accuracy. When those additional heuristics are employed, the run time of the more accurate algorithm is comparable with that of a simple BLAST search for data sets with a high number of short query sequences. Moreover, the accuracy of the EPA is significantly higher, in particular when the sample of taxa in the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally fast but more accurate alternative to BLAST for tree-based inference of the evolutionary origin and composition of short sequence reads. We are also actively developing a Web server that offers a freely available service for computing read placements on trees using the EPA.

  14. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

  15. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  16. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

  17. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    NASA Astrophysics Data System (ADS)

    Stsepankou, D.; Arns, A.; Ng, S. K.; Zygmanski, P.; Hesser, J.

    2012-10-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For the quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as tolerance to massive undersampling of the number of projections. Projection matrix errors of up to 1° in projection angle are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without changing the current hardware of the imaging system.

  18. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.

  19. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues are related to the difficulties in accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a statistical methodology for more accurately estimating the probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to establish whether the average permeability of a fracture network can be predicted with reduced uncertainty, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
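
    Fracture attributes such as lengths and apertures are often modelled with power-law tails; a standard closed-form Maximum Likelihood Estimator for the exponent of a continuous power law, with its asymptotic standard error, is sketched below. It illustrates the kind of estimator the abstract refers to rather than the study's exact procedure, and x_min is assumed known.

      import numpy as np

      def powerlaw_exponent_mle(x, x_min):
          """ML estimate of alpha for p(x) ~ x^(-alpha), x >= x_min (continuous case),
          together with its asymptotic standard error."""
          x = np.asarray(x, dtype=float)
          x = x[x >= x_min]
          alpha = 1.0 + x.size / np.sum(np.log(x / x_min))
          std_err = (alpha - 1.0) / np.sqrt(x.size)
          return alpha, std_err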

  20. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density

    PubMed Central

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A.

    2009-01-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f0 = exp ϕ0 where ϕ0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log–concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of Hk, the “lower invelope” of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of ϕ0 = log f0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values. PMID:19881896

  1. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, σ1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ2 > σ1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
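
    The three properties listed above can be checked by simulation for the simple (unbroken) power law, for which the ML index estimator and the Cramer-Rao bound have closed forms. The sketch below assumes an ideal detector with no response function, which is a simplification of the study's setting; parameter values are illustrative.

      import numpy as np

      def check_mle_properties(gamma=2.7, e_min=1.0, n_events=10000, n_trials=500, seed=1):
          """Monte Carlo check that the ML spectral-index estimator is nearly unbiased
          and attains the Cramer-Rao bound (gamma - 1)^2 / N for an ideal detector."""
          rng = np.random.default_rng(seed)
          u = rng.random((n_trials, n_events))
          E = e_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))     # inverse-CDF sampling of the spectrum
          gamma_hat = 1.0 + n_events / np.log(E / e_min).sum(axis=1)
          crb = (gamma - 1.0) ** 2 / n_events
          return gamma_hat.mean(), gamma_hat.var(), crb       # compare the variance with the CRB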

  2. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the

  3. Performance in population models for count data, part I: maximum likelihood approximations.

    PubMed

    Plan, Elodie L; Maloney, Alan; Trocóniz, Iñaki F; Karlsson, Mats O

    2009-08-01

    There has been little evaluation of maximum likelihood approximation methods for non-linear mixed effects modelling of count data. The aim of this study was to explore the estimation accuracy of population parameters from six count models, using two different methods and programs. Simulations of 100 data sets were performed in NONMEM for each probability distribution with parameter values derived from a real case study on 551 epileptic patients. Models investigated were: Poisson (PS), Poisson with Markov elements (PMAK), Poisson with a mixture distribution for individual observations (PMIX), Zero Inflated Poisson (ZIP), Generalized Poisson (GP) and Negative Binomial (NB). Estimations of simulated datasets were completed with Laplacian approximation (LAPLACE) in NONMEM and LAPLACE/Gaussian Quadrature (GQ) in SAS. With LAPLACE, the average absolute value of the bias (AVB) in all models was 1.02% for fixed effects, and ranged 0.32-8.24% for the estimation of the random effect of the mean count (lambda). The random effect of the overdispersion parameter present in ZIP, GP and NB was underestimated (-25.87, -15.73 and -21.93% of relative bias, respectively). Analysis with GQ 9 points resulted in an improvement in these parameters (3.80% average AVB). Methods implemented in SAS had a lower fraction of successful minimizations, and GQ 9 points was considerably slower than 1 point. Simulations showed that parameter estimates, even when biased, resulted in data that were only marginally different from data simulated from the true model. Thus all methods investigated appear to provide useful results for the investigated count data models. PMID:19653080

  4. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGESBeta

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally

  5. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    SciTech Connect

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of

  6. Dose reduction in digital breast tomosynthesis using a penalized maximum likelihood reconstruction

    NASA Astrophysics Data System (ADS)

    Das, Mini; Gifford, Howard; O'Connor, Michael; Glick, Stephen J.

    2009-02-01

    Digital breast tomosynthesis (DBT) is a 3D imaging modality with limited angle projection data. The ability of tomosynthesis systems to accurately detect smaller microcalcifications is debatable. This is because of the higher noise in the projection data (lower average dose per projection), which is then propagated through the reconstructed image. Reconstruction methods that minimize the propagation of quantum noise have potential to improve microcalcification detectability using DBT. In this paper we show that penalized maximum likelihood (PML) reconstruction in DBT yields images with an improved resolution/noise tradeoff as compared to conventional filtered backprojection (FBP). The signal to noise ratio (SNR) using PML was observed to be higher than that obtained using the standard FBP algorithm. Our results indicate that for microcalcifications, PML reconstructions obtained with a mean glandular dose (MGD) of 1.5 mGy yielded better SNR than those obtained with FBP using a 4 mGy total dose. Thus the total dose could perhaps be reduced to one-third or lower with the same microcalcification detectability if PML reconstruction is used instead of FBP. The visibility of low contrast masses at various contrast levels was studied using a contrast-detail phantom in a breast-shaped structure with average breast density. Images generated at various dose levels indicate that the visibility of low contrast masses in PML reconstructions is significantly better than in those generated using FBP. SNR measurements in the low-contrast study did not appear to correlate with the visual subjective analysis of the reconstructions, indicating that SNR is not a good figure of merit for this task.
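
    To make the penalized-likelihood idea concrete, the following sketch maximizes a Poisson log-likelihood with a quadratic roughness penalty on a simulated 1-D signal; it uses an identity system matrix instead of a tomosynthesis projector, so it illustrates only the form of the PML objective, not the authors' reconstruction.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Hypothetical 1-D signal: a smooth background with a narrow bright spot
# (a stand-in for a microcalcification), observed through Poisson noise.
truth = 5.0 + 20.0 * np.exp(-0.5 * ((np.arange(64) - 40) / 1.5) ** 2)
y = rng.poisson(truth).astype(float)

beta = 0.05  # roughness-penalty weight; larger values give smoother, less noisy estimates

def penalized_negloglik(x):
    # Poisson negative log-likelihood (identity system matrix, for brevity)
    nll = np.sum(x - y * np.log(x))
    # quadratic roughness penalty on neighbouring differences
    rough = np.sum(np.diff(x) ** 2)
    return nll + beta * rough

x0 = np.full_like(y, y.mean())
res = minimize(penalized_negloglik, x0, method="L-BFGS-B",
               bounds=[(1e-6, None)] * y.size)
x_pml = res.x
print(float(np.abs(x_pml - truth).mean()))
```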

  7. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
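
    The role that estimated sensitivities play in such an algorithm can be sketched on a toy output-error problem: a Gauss-Newton (modified Newton-Raphson) iteration in which the sensitivities are approximated by central finite differences rather than by MNRES's local surface fits; the exponential model, noise level, and starting values below are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 50)

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

theta_true = np.array([2.0, 0.8])
z = model(theta_true, t) + 0.02 * rng.standard_normal(t.size)  # noisy measurements

def sensitivities(theta, t, h=1e-6):
    # central finite-difference approximation of d(model)/d(theta)
    S = np.empty((t.size, theta.size))
    for j in range(theta.size):
        dp = np.zeros_like(theta)
        dp[j] = h
        S[:, j] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * h)
    return S

theta = np.array([1.5, 0.6])  # starting guess
for _ in range(20):
    r = z - model(theta, t)
    S = sensitivities(theta, t)
    # Gauss-Newton (modified Newton-Raphson) step for the ML cost function
    step = np.linalg.solve(S.T @ S, S.T @ r)
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break

# Cramer-Rao bound from the information matrix, assuming known noise variance
sigma2 = 0.02 ** 2
crb = np.diag(sigma2 * np.linalg.inv(S.T @ S))
print(theta, np.sqrt(crb))
```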

  8. Fusion of hyperspectral and lidar data based on dimension reduction and maximum likelihood

    NASA Astrophysics Data System (ADS)

    Abbasi, B.; Arefi, H.; Bigdeli, B.; Motagh, M.; Roessner, S.

    2015-04-01

    The limitations of individual remote sensing sensors in extracting different objects have made fusion of data from multiple sensors an increasingly common way to improve classification results. Using data provided by different sensors increases both spatial and spectral accuracy. Lidar (Light Detection and Ranging) data fused with hyperspectral images (HSI) provide rich data for classification of surface objects. Lidar data, which carry high quality geometric information, play a key role in the segmentation and classification of elevated features such as buildings and trees. Hyperspectral data, with their high spectral resolution, support the discrimination of objects with different spectral signatures such as soil, water, and grass. This paper presents a fusion methodology for Lidar and hyperspectral data aimed at improving classification accuracy in urban areas. In the first step, feature extraction is applied to each data set separately: texture features based on the GLCM (Grey Level Co-occurrence Matrix) are generated from the Lidar data, and PCA (Principal Component Analysis) and MNF (Minimum Noise Fraction) dimension-reduction methods are applied to the HSI. In the second step, a Maximum Likelihood (ML) classification method is applied to each feature space. Finally, a fusion method combines the classification results. A co-registered hyperspectral and Lidar data set from the University of Houston was used to evaluate the proposed method. The data contain nine classes: Building, Tree, Grass, Soil, Water, Road, Parking, Tennis Court and Running Track. The experiments show an improvement of the overall classification accuracy to 88%.
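
    A minimal sketch of the ML classification step on fused features (simulated three-class data standing in for stacked lidar/HSI features; not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fused feature vectors (e.g., MNF bands plus lidar GLCM textures),
# three classes with 200 training samples each.
def make_class(mean, n=200):
    return mean + rng.standard_normal((n, len(mean)))

means = [np.array([0.0, 0.0, 0.0]),
         np.array([3.0, 1.0, -1.0]),
         np.array([-2.0, 2.5, 1.5])]
train = [make_class(m) for m in means]

# ML classification: assign each pixel to the class that maximizes the
# multivariate Gaussian log-likelihood with class-specific mean and covariance.
params = []
for X in train:
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    params.append((mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]))

def classify(x):
    scores = []
    for mu, cov_inv, logdet in params:
        d = x - mu
        scores.append(-0.5 * (logdet + d @ cov_inv @ d))
    return int(np.argmax(scores))

test_pixel = np.array([2.7, 0.8, -0.9])  # hypothetical fused feature vector
print(classify(test_pixel))
```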

  9. Bias Reduction for Low-Statistics PET: Maximum Likelihood Reconstruction With a Modified Poisson Distribution

    PubMed Central

    Van Slambrouck, Katrien; Stute, Simon; Comtat, Claude; Sibomana, Merence; van Velden, Floris H. P.; Boellaard, Ronald

    2015-01-01

    Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method is a simplification of ABML. ABML has lower and upper bounds for the reconstructed image whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms-precorrected data. However, both NEGML and AML yield bias-free images for large values of their parameter. PMID:25137726
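
    For reference, the basic MLEM update that the abstract takes as its starting point can be sketched on a toy system (random system matrix, simulated counts; not a real PET geometry):

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny toy system: 40 detector bins, 30 image pixels, random non-negative
# system matrix standing in for the true projector.
A = rng.random((40, 30))
x_true = rng.gamma(shape=2.0, scale=1.0, size=30)
y = rng.poisson(A @ x_true).astype(float)

# Classic MLEM multiplicative update; non-negativity is preserved because
# every factor is non-negative, the same property that causes positive bias
# in low-count data.
x = np.ones(30)
sens = A.sum(axis=0)  # A^T 1, the sensitivity image
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens

print(float(np.abs(x - x_true).mean()))
```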

  10. Maximum likelihood fitting of tidal streams with application to the Sagittarius dwarf tidal tails

    NASA Astrophysics Data System (ADS)

    Cole, Nathan

    2009-06-01

    A maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid is presented. Over small spatial extent, the tidal debris is modeled as a cylinder with density that falls off as a Gaussian with distance from its axis, while the smooth component of the stellar spheroid is modeled as a Hernquist profile. The method is designed to use 2.5° wide stripes of data that follow great circles across the sky, in which the tidal debris within each stripe is fit separately. A probabilistic separation technique which allows for the extraction of the optimized tidal streams from the input data set is presented. This technique allows for the creation of separate catalogs for each component fit in the stellar spheroid: one catalog for each piece of tidal debris that fits the density profile of the debris and a single catalog which fits the density profile of the smooth stellar spheroid component. This separation technique is proven to be effective by extracting the simulated tidal debris from the simulated datasets. A method to determine the statistical errors is also developed which utilizes a Hessian matrix to determine the width of the peak at the maximum of the likelihood surface. This error analysis method serves as a means of testing the algorithm against the simulated datasets as well as determining the statistical errors of the optimizations over observational data. A heuristic method is also defined for determining the numerical error in the optimizations. The maximum likelihood algorithm is then used to optimize spatial data taken from the Sloan Digital Sky Survey. Stars having the color of blue F turnoff stars, 0.1 < (g - r)_0 < 0.3 and (u - g)_0 > 0.4, are extracted from the Sloan Digital Sky Survey database. In the algorithm, the absolute magnitude distribution of F turnoff stars is modeled as a Gaussian distribution, which is an improvement over previous methods which utilize a fixed absolute magnitude M_g0

  11. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets.

  12. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
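
    A closely related, now-standard successive-approximation scheme is the EM iteration for a two-component normal mixture, sketched below on simulated, fully unlabeled data; in the partially identified setting of the paper, the labeled observations would enter with fixed 0/1 membership probabilities.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
# Simulated draws from a two-component normal mixture (labels discarded)
x = np.concatenate([rng.normal(-1.0, 0.7, 300), rng.normal(2.0, 1.2, 700)])

# EM: a successive-approximation iteration for the likelihood equations
w = np.array([0.5, 0.5])
mu = np.array([-2.0, 1.0])
sd = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior membership probabilities ("responsibilities")
    dens = np.stack([w[k] * norm.pdf(x, mu[k], sd[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted maximum-likelihood updates of the mixture parameters
    nk = resp.sum(axis=1)
    w = nk / x.size
    mu = resp @ x / nk
    sd = np.sqrt(np.array([np.sum(resp[k] * (x - mu[k]) ** 2)
                           for k in range(2)]) / nk)

print(w, mu, sd)
```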

  13. Efficient Full Information Maximum Likelihood Estimation for Multidimensional IRT Models. Research Report. ETS RR-09-03

    ERIC Educational Resources Information Center

    Rijmen, Frank

    2009-01-01

    Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…

  14. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from

  15. A penalized likelihood approach for mixture cure models.

    PubMed

    Corbière, Fabien; Commenges, Daniel; Taylor, Jeremy M G; Joly, Pierre

    2009-02-01

    Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals that will never experience it. Important issues in mixture cure models are estimation of the baseline survival function for susceptibles and estimation of the variance of the regression parameters. The aim of this paper is to propose a penalized likelihood approach, which allows for flexible modeling of the hazard function for susceptible individuals using M-splines. This approach also permits direct computation of the variance of parameters using the inverse of the Hessian matrix. Properties and limitations of the proposed method are discussed and an illustration from a cancer study is presented.

  16. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated.

    PubMed

    Langlois, Dominic; Cousineau, Denis; Thivierge, J P

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.
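
    A hedged sketch of the continuous power-law case: the closed-form MLE for unbounded data and a numerically maximized likelihood for data truncated at an upper bound (simulated data; the discrete and censored estimators developed in the paper require their own likelihoods).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
alpha_true, xmin = 2.5, 1.0
# Sample from a continuous power law p(x) ~ x^(-alpha) for x >= xmin
u = rng.random(5000)
x = xmin * (1 - u) ** (-1.0 / (alpha_true - 1.0))

# Closed-form MLE for the unbounded case (the Hill estimator)
alpha_hat = 1.0 + x.size / np.sum(np.log(x / xmin))

# For data truncated at an upper bound xmax, the closed form no longer holds;
# maximize the truncated log-likelihood numerically instead.
xmax = 50.0
xt = x[x <= xmax]

def negloglik(a):
    # normalizing constant of x^(-a) on [xmin, xmax]
    norm = (xmin ** (1 - a) - xmax ** (1 - a)) / (a - 1.0)
    return -(np.sum(-a * np.log(xt)) - xt.size * np.log(norm))

alpha_trunc = minimize_scalar(negloglik, bounds=(1.05, 5.0), method="bounded").x
print(alpha_hat, alpha_trunc)
```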

  17. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated

    NASA Astrophysics Data System (ADS)

    Langlois, Dominic; Cousineau, Denis; Thivierge, J. P.

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.

  18. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  19. Estimation of the Unextendable Dead Time Period in a Flow of Physical Events by the Method of Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Nezhel'skaya, L. A.

    2016-09-01

    A flow of physical events (photons, electrons, and other elementary particles) is studied. One of the mathematical models of such flows is the modulated MAP flow of events operating under conditions of an unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimating the dead time period from observations of the arrival times of events is solved by the method of maximum likelihood.

  20. Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds

    USGS Publications Warehouse

    Conroy, M.J.; Morgan, B.J.T.; North, P.M.

    1985-01-01

    It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate what this reporting rate is, by comparison of recoveries of rings offering a monetary reward, to ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
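
    The likelihood structure can be sketched for a stripped-down version with two temporal strata, no geographic indexing, and hypothetical counts: reward-ring recoveries inform the harvest rate, ordinary-ring recoveries inform the product of harvest and reporting rates, and a likelihood ratio test compares constant versus stratum-specific reporting rates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, chi2

# Hypothetical reward-ring study, two temporal strata:
# (number ringed, number recovered) for reward rings (assumed 100% reporting)
# and for ordinary rings.
reward   = [(1000, 80), (1200, 108)]
ordinary = [(5000, 160), (6000, 230)]

def negloglik(params, common_lambda):
    # params = [h1, h2, lambda] (constant reporting) or [h1, h2, l1, l2]
    h = params[:2]
    lam = np.repeat(params[2], 2) if common_lambda else params[2:4]
    ll = 0.0
    for i, ((nr, rr), (no, ro)) in enumerate(zip(reward, ordinary)):
        ll += binom.logpmf(rr, nr, h[i])           # reward-ring recoveries
        ll += binom.logpmf(ro, no, h[i] * lam[i])  # ordinary-ring recoveries
    return -ll

bnds3 = [(1e-6, 1 - 1e-6)] * 3
bnds4 = [(1e-6, 1 - 1e-6)] * 4
fit0 = minimize(negloglik, [0.08, 0.08, 0.4], args=(True,), bounds=bnds3)
fit1 = minimize(negloglik, [0.08, 0.08, 0.4, 0.4], args=(False,), bounds=bnds4)

# Likelihood ratio test for temporal variation in the reporting rate
lr = 2.0 * (fit0.fun - fit1.fun)
p_value = chi2.sf(lr, df=1)
print(fit0.x, fit1.x, lr, p_value)
```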

  1. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    PubMed

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution.

  2. A Better Lemon Squeezer? Maximum-Likelihood Regression with Beta-Distributed Dependent Variables

    ERIC Educational Resources Information Center

    Smithson, Michael; Verkuilen, Jay

    2006-01-01

    Uncorrectable skew and heteroscedasticity are among the "lemons" of psychological data, yet many important variables naturally exhibit these properties. For scales with a lower and upper bound, a suitable candidate for models is the beta distribution, which is very flexible and models skew quite well. The authors present maximum-likelihood…
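
    As a concrete illustration of the kind of model the abstract refers to (a sketch under invented data and parameters, not the authors' implementation), the code below maximizes the likelihood of a beta-distributed outcome with a logit link for the mean and a constant precision parameter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(7)
# Hypothetical bounded (0, 1) outcome depending on one covariate
n = 400
x = rng.standard_normal(n)
mu_true = expit(0.3 + 0.9 * x)   # logit link for the mean
phi_true = 8.0                   # precision parameter
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def negloglik(theta):
    b0, b1, log_phi = theta
    mu = expit(b0 + b1 * x)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    # beta log-density written out explicitly
    ll = (gammaln(a + b) - gammaln(a) - gammaln(b)
          + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 1.0], method="BFGS")
print(fit.x[:2], np.exp(fit.x[2]))  # regression coefficients and precision
```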

  3. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configurations are given.
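
    The core maximum-likelihood fit that such a toolbox performs can be sketched outside MATLAB as follows; the adaptive stimulus-selection part of the UML procedure is not shown, and the logistic parameterization, stimulus levels, and lapse-rate constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
# Hypothetical yes/no data: stimulus levels and simulated binary responses
levels = np.repeat(np.linspace(-10, 10, 9), 30)

def psych(x, alpha, beta, lam):
    # logistic psychometric function with threshold alpha, slope beta,
    # and a symmetric lapse rate lam
    return lam / 2 + (1 - lam) / (1 + np.exp(-beta * (x - alpha)))

p_true = psych(levels, alpha=1.5, beta=0.6, lam=0.04)
resp = rng.random(levels.size) < p_true

def negloglik(theta):
    alpha, log_beta, logit_lam = theta
    lam = 0.1 / (1 + np.exp(-logit_lam))  # lapse rate constrained to (0, 0.1)
    p = np.clip(psych(levels, alpha, np.exp(log_beta), lam), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(resp, np.log(p), np.log(1 - p)))

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
alpha_hat = fit.x[0]
beta_hat = np.exp(fit.x[1])
lam_hat = 0.1 / (1 + np.exp(-fit.x[2]))
print(alpha_hat, beta_hat, lam_hat)
```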

  4. More than you may want to know about maximum likelihood estimation. [applied to aircraft control and stability

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1984-01-01

    A discussion is undertaken concerning the maximum likelihood estimator and the aircraft equations of motion it employs, with attention to the application of the concepts of minimization and estimation to a simple computed aircraft example. Graphic representations are given for the cost functions to help illustrate the minimization process. The basic concepts are then generalized, and estimations obtained from flight data are evaluated. The example considered shows the advantage of low measurement noise, multiple estimates at a given condition, the Cramer-Rao bounds, and the quality of the match between measured and computed data.

  5. Performance evaluation of signal dependent noise predictive maximum likelihood detector for two-dimensional magnetic recording read/write channel

    NASA Astrophysics Data System (ADS)

    Kawamura, S.; Okamoto, Y.; Nakamura, Y.; Osawa, H.; Kanai, Y.; Muraoaka, H.

    2015-05-01

    Two-dimensional magnetic recording is affected by inter-track interference (ITI) from the adjacent tracks. We investigate the improvement in bit error rate performance of partial response maximum likelihood (PRML) systems with a signal dependent noise predictor (SDNP). The systems reduce the influence of ITI with a two-dimensional finite impulse response filter that uses the waveforms reproduced by triple readers over the adjacent tracks. The results show that the SDNP provides a larger improvement for the PR(2,6,1)ML system compared with the PR1ML system.

  6. Maximum likelihood inference implies a high, not a low, ancestral haploid chromosome number in Araceae, with a critique of the bias introduced by ‘x’

    PubMed Central

    Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.

    2012-01-01

    Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850

  7. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    PubMed

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
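
    Feature 2 (accounting for the instrument dead time) has a particularly transparent special case for a single-exponential dwell-time distribution; the sketch below, on simulated data with an invented rate and dead time (not MEMLET code), shows how the renormalized likelihood removes the bias of the naive fit.

```python
import numpy as np

rng = np.random.default_rng(9)
k_true, t_min = 4.0, 0.05          # rate (1/s) and instrument dead time (s)
t = rng.exponential(1.0 / k_true, 20000)
t = t[t >= t_min]                  # events shorter than the dead time are lost

# Naive MLE ignoring the detection limit underestimates the rate:
k_naive = 1.0 / t.mean()

# MLE using the renormalized (left-truncated) exponential PDF,
# f(t | t >= t_min) = k * exp(-k * (t - t_min)), has a closed form:
k_mle = 1.0 / (t.mean() - t_min)
print(k_naive, k_mle)
```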

  8. A flexible decision-aided maximum likelihood phase estimation in hybrid QPSK/OOK coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-04-01

    Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block length effect impacts system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion-managed (DM) fiber links. Based on our model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block length effect. We derive the analytical expression of the phase error variance for predicting the performance of a coherent receiver with the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that our flexible DA-ML algorithm achieves a significant performance improvement over the conventional DA-ML algorithm when the block length is fixed. Compared with the conventional DA-ML with optimum block length, our flexible DA-ML obtains better system performance. This indicates that the flexible DA-ML algorithm is more effective at mitigating phase noise than the conventional DA-ML algorithm.

  9. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis, only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely code-word and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  10. Important factors in the maximum likelihood analysis of flight test maneuvers

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.; Montgomery, T. D.

    1979-01-01

    The information presented is based on the experience in the past 12 years at the NASA Dryden Flight Research Center of estimating stability and control derivatives from over 3500 maneuvers from 32 aircraft. The overall approach to the analysis of dynamic flight test data is outlined. General requirements for data and instrumentation are discussed and several examples of the types of problems that may be encountered are presented.

  11. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matern form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", with the end goal of assigning likely formation histories for the patches under consideration. Our results
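
    A heavily simplified, one-dimensional sketch of the Whittle approach (not the authors' tapered 2-D estimator): fit a two-parameter Lorentzian spectrum, proportional to the nu = 1/2 (exponential-covariance) Matern form, to the periodogram of a simulated correlated profile by maximizing the Whittle likelihood; the AR(1) series is only a stand-in for real topography, so the fitted model is an approximation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
# Hypothetical 1-D "topographic profile": a correlated (AR(1)-type) series
n = 2048
white = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + white[i]

# Periodogram at positive Fourier frequencies (DC and Nyquist dropped)
freqs = np.fft.rfftfreq(n)[1:-1]
I = (np.abs(np.fft.rfft(x)[1:-1]) ** 2) / n

# Parametric spectral model: S(f) = sigma2 / (1 + (2*pi*f*rho)^2),
# a Lorentzian (proportional to the 1-D Matern spectrum with nu = 1/2)
def whittle_negloglik(theta):
    log_s2, log_rho = theta
    S = np.exp(log_s2) / (1.0 + (2 * np.pi * freqs * np.exp(log_rho)) ** 2)
    # Whittle approximation to the negative log-likelihood
    return np.sum(np.log(S) + I / S)

fit = minimize(whittle_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
sigma2_hat, rho_hat = np.exp(fit.x)
print(sigma2_hat, rho_hat)
```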

  12. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  13. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  14. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  15. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  16. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  17. Maximum likelihood method to correct for missed levels based on the {Delta}{sub 3}(L) statistic

    SciTech Connect

    Mulhall, Declan

    2011-05-15

    The Δ3(L) statistic of random matrix theory is defined as the average of a set of random numbers {δ}, derived from a spectrum. The distribution p(δ) of these random numbers is used as the basis of a maximum likelihood method to gauge the fraction x of levels missed in an experimental spectrum. The method is tested on an ensemble of depleted spectra from the Gaussian orthogonal ensemble (GOE) and accurately returned the correct fraction of missed levels. Neutron resonance data and acoustic spectra of an aluminum block were analyzed. All results were compared with an analysis based on an established expression for Δ3(L) for a depleted GOE spectrum. The effects of intruder levels are examined and seen to be very similar to those of missed levels. Shell model spectra were seen to give the same p(δ) as the GOE.

  18. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson-maximum likelihood estimation method (PMLE) for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are used to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the computation time, and that the method can restore images of earthquake ruin scenes effectively and is practical.
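
    A minimal sketch of Poisson maximum-likelihood (Richardson-Lucy/EM) restoration on a simulated blurred, photon-limited image; the scene, point spread function, and iteration count are invented, and the paper's automatic acceleration step is not included.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(11)

# Hypothetical blurred, photon-limited image of a simple scene
truth = np.zeros((64, 64))
truth[20:30, 25:45] = 50.0
truth[40:44, 10:20] = 80.0
psf = np.ones((5, 5)) / 25.0                     # assumed known blur kernel
blurred = fftconvolve(truth, psf, mode="same")
y = rng.poisson(blurred).astype(float)

# Richardson-Lucy iteration, i.e. EM for the Poisson maximum-likelihood
# deblurring problem; each update is multiplicative and keeps the estimate
# non-negative.
psf_flip = psf[::-1, ::-1]
x = np.full_like(y, y.mean())
for _ in range(50):
    est = fftconvolve(x, psf, mode="same")
    ratio = y / np.maximum(est, 1e-12)
    x *= fftconvolve(ratio, psf_flip, mode="same")

print(float(np.abs(x - truth).mean()))
```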

  19. A real-time signal combining system for Ka-band feed arrays using maximum-likelihood weight estimates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1990-01-01

    A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.

  20. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    NASA Astrophysics Data System (ADS)

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.

    2016-08-01

    Confocal fluorescence microscopy (CFM) is widely used in biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. Using this recorded data an attempt is then made to recover the object from the whole set of recorded photon array data; in this paper maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.

  1. Modeling and Maximum Likelihood Fitting of Gamma-Ray and Radio Light Curves of Millisecond Pulsars Detected with Fermi

    NASA Technical Reports Server (NTRS)

    Johnson, T. J.; Harding, A. K.; Venter, C.

    2012-01-01

    Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.

  2. Design optimization of the proximity focusing RICH with dual aerogel radiator using a maximum-likelihood analysis of Cherenkov rings

    NASA Astrophysics Data System (ADS)

    Pestotnik, R.; Križan, P.; Korpar, S.; Iijima, T.

    2008-09-01

    The use of a sequence of aerogel radiators with different refractive indices in a proximity focusing Cherenkov ring imaging detector has been shown to improve the resolution of the Cherenkov angle. In order to obtain further information on the capabilities of such a detector, a maximum-likelihood analysis has been performed on simulated data, with the simulation being appropriate for the upgraded Belle detector. The results show that by using a sequence of two aerogel layers with different refractive indices, the K/π separation efficiency is improved in the kinematic region above 3 GeV/c. In the low momentum region, the focusing configuration (with n1 and n2 chosen such that the Cherenkov rings from different aerogel layers at 4 GeV/c overlap) shows a better performance than the defocusing one (where the two Cherenkov rings are well separated).

  3. Approach trajectory planning system for maximum concealment

    NASA Technical Reports Server (NTRS)

    Warner, David N., Jr.

    1986-01-01

    A computer-simulation study was undertaken to investigate a maximum concealment guidance technique (pop-up maneuver), which military aircraft may use to capture a glide path from masked, low-altitude flight typical of terrain following/terrain avoidance flight enroute. The guidance system applied to this problem is the Fuel Conservative Guidance System. Previous studies using this system have concentrated on the saving of fuel in basically conventional land and ship-based operations. Because this system is based on energy-management concepts, it also has direct application to the pop-up approach which exploits aircraft performance. Although the algorithm was initially designed to reduce fuel consumption, the commanded deceleration is at its upper limit during the pop-up and, therefore, is a good approximation of a minimum-time solution. Using the model of a powered-lift aircraft, the results of the study demonstrated that guidance commands generated by the system are well within the capability of an automatic flight-control system. Results for several initial approach conditions are presented.

  4. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus

    PubMed Central

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-01-01

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gamete as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulating data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At population level SDR was the predominant mechanisms for the 19 parental mandarins. PMID:25894579

  5. MLE (Maximum Likelihood Estimator) reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    SciTech Connect

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
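
    For context, the core of the MLE (MLEM) algorithm referred to above is a multiplicative update driven by the transition matrix. The sketch below uses a hypothetical two-pixel, three-bin system with made-up numbers and a fixed iteration count; it is not the ECAT-III geometry, the Monte Carlo transition matrix, or the statistical stopping rule studied in the record.

```python
import numpy as np

# Hypothetical toy system: 2 pixels, 3 detector bins.
# A[i, j] = probability that an emission in pixel j is detected in bin i.
A = np.array([[0.6, 0.1],
              [0.3, 0.3],
              [0.1, 0.6]])
y = np.array([120.0, 90.0, 60.0])   # measured counts per bin

lam = np.ones(A.shape[1])           # initial activity estimate
sens = A.sum(axis=0)                # sensitivity image (column sums)

for _ in range(50):                 # fixed iteration count as a stand-in for a stopping rule
    expected = A @ lam              # forward projection
    ratio = y / np.maximum(expected, 1e-12)
    lam *= (A.T @ ratio) / sens     # multiplicative MLEM update

print(lam)                          # reconstructed pixel activities
```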

  6. Molecular systematics of armadillos (Xenarthra, Dasypodidae): contribution of maximum likelihood and Bayesian analyses of mitochondrial and nuclear genes.

    PubMed

    Delsuc, Frédéric; Stanhope, Michael J; Douzery, Emmanuel J P

    2003-08-01

    The 30 living species of armadillos, anteaters, and sloths (Mammalia: Xenarthra) represent one of the three major clades of placentals. Armadillos (Cingulata: Dasypodidae) are the earliest and most speciose xenarthran lineage, with 21 described species. Their contentious phylogeny was studied here by adding two mitochondrial genes (NADH dehydrogenase subunit 1 [ND1] and 12S ribosomal RNA [12S rRNA]) to the three protein-coding nuclear genes (alpha2B adrenergic receptor [ADRA2B], breast cancer susceptibility exon 11 [BRCA1], and von Willebrand factor exon 28 [VWF]), yielding a total of 6869 aligned nucleotide sites for thirteen xenarthran species. The two mitochondrial genes were characterized by marked excesses of transitions over transversions, with a strong bias toward CT transitions for the 12S rRNA, and exhibited two- to fivefold faster evolutionary rates than the fastest nuclear gene (ADRA2B). Maximum likelihood and Bayesian phylogenetic analyses supported the monophyly of Dasypodinae, Tolypeutinae, and Euphractinae, with the latter two armadillo subfamilies strongly clustering together. Conflicting branching points between individual genes involved relationships within the subfamilies Tolypeutinae and Euphractinae. Owing to a greater number of informative sites, the overall concatenation favored the mitochondrial topology, with the classical grouping of Cabassous and Priodontes within Tolypeutinae and a close relationship between Euphractus and Chaetophractus within Euphractinae. However, low statistical support values, associated with almost equal distributions of apomorphies among alternatives, suggested that two parallel events of rapid speciation occurred within these two armadillo subfamilies.

  7. Bounds for Maximum Likelihood Regular and Non-Regular DoA Estimation in K-Distributed Noise

    NASA Astrophysics Data System (ADS)

    Abramovich, Yuri I.; Besson, Olivier; Johnson, Ben A.

    2015-11-01

    We consider the problem of estimating the direction of arrival of a signal embedded in K-distributed noise, when secondary data containing noise only are assumed to be available. Based upon a recent formula for the Fisher information matrix (FIM) for complex elliptically distributed data, we provide a simple expression for the FIM within the two-data-set framework. In the specific case of K-distributed noise, we show that, under certain conditions, the FIM for the deterministic part of the model can be unbounded, while the FIM for the covariance part of the model is always bounded. In the general case of elliptical distributions, we provide a sufficient condition for unboundedness of the FIM. Accurate approximations of the FIM for K-distributed noise are also derived when it is bounded. Additionally, the maximum likelihood estimator of the signal DoA and an approximated version are derived, assuming a known covariance matrix; the latter is then estimated from secondary data using a conventional regularization technique. When the FIM is unbounded, an analysis of the estimators reveals a rate of convergence much faster than the usual T^-1. Simulations illustrate the different behaviors of the estimators, depending on whether the FIM is bounded or not.

  8. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus.

    PubMed

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-04-20

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation via unreduced gametes to be its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism at both the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins.

  9. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus.

    PubMed

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-01-01

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation via unreduced gametes to be its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism at both the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579

  10. Investigation of optimal parameters for penalized maximum-likelihood reconstruction applied to iodinated contrast-enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.

    2016-03-01

    Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that the use of statistically based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a penalized maximum-likelihood method and comparing its performance with a clinical cone-beam filtered backprojection (FBP) algorithm.

  11. Maximum likelihood model based on minor allele frequencies and weighted Max-SAT formulation for haplotype assembly.

    PubMed

    Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser

    2014-06-01

    Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between diseases and their potential genetic causes, e.g., in Genome Wide Association Studies. Due to the expense of directly determining haplotypes and recent progress in high-throughput sequencing, there has been increasing motivation for haplotype assembly, which is the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from existing maximum likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the information of MAF is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the information of MAF is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results. PMID:24491253
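
    As background for the reduction mentioned above, the plain MEC objective scores a candidate haplotype (and its complement) by the minimum number of allele calls that must be corrected for each fragment to match one of the two haplotypes. The toy instance below evaluates only that cost; the MAF-weighted model and the weighted Max-SAT encoding of the paper are not reproduced.

```python
import numpy as np

def mec(fragments, h):
    """Minimum error correction cost of a haplotype h (0/1 array) and its complement.

    Each fragment is a dict {SNP index: allele}; sites marked -1 in h are skipped.
    A fragment's cost is the smaller number of mismatches against h or against 1 - h.
    """
    cost = 0
    for frag in fragments:
        mismatch_h = sum(1 for i, a in frag.items() if h[i] != -1 and a != h[i])
        mismatch_c = sum(1 for i, a in frag.items() if h[i] != -1 and a != 1 - h[i])
        cost += min(mismatch_h, mismatch_c)
    return cost

# Toy example: three aligned fragments covering four SNP sites.
frags = [{0: 0, 1: 0, 2: 1}, {1: 1, 2: 0, 3: 0}, {0: 0, 2: 1, 3: 1}]
print(mec(frags, np.array([0, 0, 1, 1])))   # cost 0: all fragments are consistent
```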

  12. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement.

    PubMed

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
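
    A rough stand-in for the idea of jointly estimating the delays and the unknown ERP template: under white Gaussian noise, the ML delay for a known template is the lag that maximizes the cross-correlation, and the ML template for known delays is the aligned average, so alternating the two steps approximates a joint estimate. The toy signal, delay range, and iteration count below are illustrative assumptions, not the two schemes evaluated in the record.

```python
import numpy as np

def estimate_delays(trials, max_lag=20, n_iter=5):
    """Alternately estimate a common template and per-trial integer delays.

    With white Gaussian noise, the ML delay for a known template is the lag
    maximising the cross-correlation, and the ML template for known delays is
    the aligned average.  Delays are anchored to trial 0, since only relative
    delays are identifiable.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    delays = np.zeros(len(trials), dtype=int)
    for _ in range(n_iter):
        template = np.mean([np.roll(t, -d) for t, d in zip(trials, delays)], axis=0)
        for k, t in enumerate(trials):
            xc = [np.dot(np.roll(t, -lag), template) for lag in lags]
            delays[k] = lags[int(np.argmax(xc))]
        delays -= delays[0]
    return delays

# Toy data: a Gaussian "ERP" bump shifted by a random delay in each trial, plus noise.
rng = np.random.default_rng(1)
n = 200
base = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)
true_delays = rng.integers(-10, 11, size=8)
trials = np.array([np.roll(base, d) + 0.3 * rng.standard_normal(n) for d in true_delays])

# The estimates should match the true relative delays up to small errors.
print(true_delays - true_delays[0])
print(estimate_delays(trials))
```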

  13. Separating components of variation in measurement series using maximum likelihood estimation. Application to patient position data in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sage, J. P.; Mayles, W. P. M.; Mayles, H. M.; Syndikus, I.

    2014-10-01

    Maximum likelihood estimation (MLE) is presented as a statistical tool to evaluate the contribution of measurement error to any measurement series in which the same quantity is measured using different independent methods. The technique was tested against artificial data sets generated for values of underlying variation in the quantity and of measurement error between 0.5 mm and 3 mm. In each case the simulation parameters were recovered to within 0.1 mm. The technique was then applied to analyzing external random positioning errors from positional audit data for 112 pelvic radiotherapy patients. Patient position offsets were measured using portal imaging analysis and external body surface measures. Using MLE to analyze all methods in parallel, it was possible to ascertain the measurement error for each method and the underlying positional variation. In the (AP / Lat / SI) directions the standard deviations of the measured patient position errors from portal imaging were (3.3 mm / 2.3 mm / 1.9 mm), arising from underlying variations of (2.7 mm / 1.5 mm / 1.4 mm) and measurement uncertainties of (1.8 mm / 1.8 mm / 1.3 mm), respectively. The measurement errors agree well with published studies. MLE used in this manner could be applied to any study in which the same quantity is measured using independent methods.
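
    A minimal sketch of this kind of variance separation for two independent methods: if both observe the same underlying quantity with independent Gaussian errors, the joint covariance has three free entries, so the underlying SD and the two measurement-error SDs are just identifiable and can be fitted by maximizing the Gaussian likelihood. The zero-mean toy data, SD values, and optimizer below are assumptions for illustration, not the patient-audit analysis above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n = 500
sig_t, sig_1, sig_2 = 2.0, 1.5, 1.0        # underlying variation and two measurement-error SDs (mm)
t = rng.normal(0, sig_t, n)
x = np.column_stack([t + rng.normal(0, sig_1, n),
                     t + rng.normal(0, sig_2, n)])

def neg_log_lik(log_sd):
    st, s1, s2 = np.exp(log_sd)            # work on the log scale to keep SDs positive
    cov = np.array([[st**2 + s1**2, st**2],
                    [st**2,         st**2 + s2**2]])
    return -multivariate_normal(mean=[0, 0], cov=cov).logpdf(x).sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
print(np.exp(res.x))                       # MLEs of (sigma_t, sigma_1, sigma_2)
```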

  14. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  15. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.

  16. First-order correct bootstrap support adjustments for splits that allow hypothesis testing when using maximum likelihood estimation.

    PubMed

    Susko, Edward

    2010-07-01

    The most frequent measure of phylogenetic uncertainty for splits is bootstrap support. Although large bootstrap support intuitively suggests that a split in a tree is well supported, it has not been clear how large bootstrap support needs to be to conclude that there is significant evidence that a hypothesized split is present. Indeed, recent work has shown that bootstrap support is not first-order correct and thus cannot be directly used for hypothesis testing. We present methods that adjust bootstrap support values in a maximum likelihood (ML) setting so that they have an interpretation corresponding to P values in conventional hypothesis testing; for instance, adjusted bootstrap support larger than 95% occurs only 5% of the time if the split is not present. Through examples and simulation settings, it is found that adjustments always increase the level of support. We also find that the nature of the adjustment is fairly constant across parameter settings. Finally, we consider adjustments that take into account the data-dependent nature of many hypotheses about splits: the hypothesis that they are present is being tested because they are in the tree estimated through ML. Here, in contrast, we find that bootstrap probability often needs to be adjusted downwards.

  17. ROC (Receiver Operating Characteristics) study of maximum likelihood estimator human brain image reconstructions in PET (Positron Emission Tomography) clinical practice

    SciTech Connect

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab.

  18. Decision-aided maximum likelihood phase estimation with optimum block length in hybrid QPSK/16QAM coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-01-01

    We propose a general model to fully describe the XPM effects induced by 16QAM channels in hybrid QPSK/16QAM wavelength division multiplexed (WDM) systems. A power spectral density (PSD) formula is presented to predict the statistical properties of XPM effects at the end of dispersion management (DM) fiber links. We derive the analytical expression of the phase error variance for optimizing the block length of a QPSK-channel coherent receiver with decision-aided (DA) maximum-likelihood (ML) phase estimation (PE). With our theoretical analysis, the optimum block length can be employed to improve the performance of the coherent receiver. Bit error rate (BER) performance in the QPSK channel is evaluated and compared through both theoretical derivation and Monte Carlo simulation. The results show that by using DA-ML with the optimum block length, the bit signal-to-noise ratio (SNR) improvement over DA-ML with fixed block lengths of 10, 20 and 40 at a BER of 10^-3 is 0.18 dB, 0.46 dB and 0.65 dB, respectively, when the in-line residual dispersion is 0 ps/nm.
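
    The DA-ML estimator itself is compact: over each block of received symbols, the phase estimate is the argument of the sum of received samples times the conjugated symbol decisions. The sketch below applies it to a toy QPSK stream with a fixed phase offset; the constellation normalization, noise level, and block length are illustrative, and none of the XPM or fiber modelling of the record is included.

```python
import numpy as np

rng = np.random.default_rng(3)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK constellation

def da_ml_phase(r, block_len):
    """Block-wise decision-aided ML phase estimate: angle of sum r_k * conj(decision_k)."""
    est = np.zeros(len(r))
    for start in range(0, len(r), block_len):
        blk = r[start:start + block_len]
        dec = qpsk[np.argmin(np.abs(blk[:, None] - qpsk[None, :]), axis=1)]  # nearest-symbol decisions
        est[start:start + block_len] = np.angle(np.sum(blk * np.conj(dec)))
    return est

# Toy run: QPSK symbols with a small common phase offset plus additive noise.
symbols = qpsk[rng.integers(0, 4, 1000)]
r = symbols * np.exp(1j * 0.1) + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(np.mean(da_ml_phase(r, block_len=20)))   # should be close to the true offset 0.1 rad
```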

  19. Maximum-likelihood estimation of lithospheric flexural rigidity, initial-loading fraction and load correlation, under isotropy

    NASA Astrophysics Data System (ADS)

    Simons, Frederik J.; Olhede, Sofia C.

    2013-06-01

    Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown the elastic flexural rigidity of the lithosphere. In the guise of an equivalent `effective elastic thickness', this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity, have revealed themselves to have insufficient power to independently constrain both it and the additional unknown initial-loading fraction and load-correlation factors, respectively. Solving this extremely ill-posed inversion problem leads to non-uniqueness and is further complicated by practical considerations such as the choice of regularizing data tapers to render the analysis sufficiently selective both in the spatial and spectral domains. Here, we rewrite the problem in a form amenable to maximum-likelihood estimation theory, which we show yields unbiased, minimum-variance estimates of flexural rigidity, initial-loading fraction and load correlation, each of those separably resolved with little a posteriori correlation between their estimates. We are also able to separately characterize the isotropic spectral shape of the initial-loading processes. Our procedure is well-posed and computationally tractable for the two-interface case. The resulting algorithm is validated by extensive simulations whose behaviour is

  20. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M.

    2016-04-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.

  1. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    PubMed

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was sought using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  2. Maximum-likelihood estimation of scatter components algorithm for x-ray coherent scatter computed tomography of the breast.

    PubMed

    Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M

    2016-04-21

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented. PMID:27025665

  3. Root of the Eukaryota tree as inferred from combined maximum likelihood analyses of multiple molecular sequence data.

    PubMed

    Arisue, Nobuko; Hasegawa, Masami; Hashimoto, Tetsuo

    2005-03-01

    Extensive studies aiming to establish the structure and root of the Eukaryota tree by phylogenetic analyses of molecular sequences have thus far not resulted in a generally accepted tree. To re-examine the eukaryotic phylogeny using alternative genes, and to obtain a more robust inference for the root of the tree as well as the relationship among major eukaryotic groups, we sequenced the genes encoding isoleucyl-tRNA and valyl-tRNA synthetases, cytosolic-type heat shock protein 90, and the largest subunit of RNA polymerase II from several protists. Combined maximum likelihood analyses of 22 protein-coding genes including the above four genes clearly demonstrated that Diplomonadida and Parabasala shared a common ancestor in the rooted tree of Eukaryota, but only when the fast-evolving sites were excluded from the original data sets. The combined analyses, together with recent findings on the distribution of a fused dihydrofolate reductase-thymidylate synthetase gene, narrowed the possible position of the root of the Eukaryota tree to the branch leading to Opisthokonta or to the common ancestor of Diplomonadida/Parabasala. However, the analyses did not agree with the position of the root on the common ancestor of Opisthokonta and Amoebozoa, which was argued by Stechmann and Cavalier-Smith [Curr. Biol. 13:R665-666, 2003] based on the presence or absence of a three-gene fusion of the pyrimidine biosynthetic pathway: carbamoyl-phosphate synthetase II, dihydroorotase, and aspartate carbamoyltransferase. The presence of the three-gene fusion recently found in the Cyanidioschyzon merolae (Rhodophyta) genome sequence data supported our analyses against the rooting proposed by Stechmann and Cavalier-Smith in 2003.

  4. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  5. PC-based hardware implementation of the maximum-likelihood classifier for the shuttle ice detection system

    NASA Astrophysics Data System (ADS)

    Jaggi, Sandeep

    1991-06-01

    This paper describes a PC-based near-real-time implementation of a two-channel maximum likelihood classifier. A prototype for the detection of ice formation on the External Tank (ET) of the Space Shuttle is being developed for the NASA Science and Technology Laboratory by Lockheed Engineering and Sciences Company at Stennis Space Center, MS. Various studies have been conducted to identify regions in the mid-infrared and infrared parts of the electromagnetic spectrum that show a difference in the reflectance characteristics of the ET surface when it is covered with ice, frost or water. These studies resulted in the selection of two channels of the spectrum for differentiating between various phases of water using imagery data. The objective is to be able to do an online classification of the ET images into distinct regions denoting ice, frost, wet or dry areas. The images are acquired with an infrared camera and digitized before being processed by a computer to yield a fast color-coded pattern, with each color representing a region. A two-monitor PC-based setup is used for image processing. Various techniques for classification, both supervised and unsupervised, are being investigated for developing a methodology. This paper discusses the implementation of a supervised classification technique. The statistical distributions of the reflectance characteristics of ice, frost, and water formation on the Spray-on-Foam-Insulation (SOFI) that covers the ET surface are acquired. These statistics are later used for classification. The computer can be set in either a training mode or a classifying mode. In the training mode, it learns the statistics of the various classes. In the classifying mode, it produces a color-coded image denoting the respective categories of classification. The results of the classifier are memory-mapped for efficiency. The speed of the classification process is limited only by the speed of the digital frame grabber and the software that interfaces the frame
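
    A minimal sketch of a supervised two-channel maximum likelihood (Gaussian) classifier of the kind described: per-class means and covariances are learned in a training step, and each pixel is then assigned to the class with the highest Gaussian log-likelihood. The class names, reflectance statistics, and training pixels below are invented for illustration and are not the SOFI statistics used in the prototype.

```python
import numpy as np

def train(classes):
    """Per-class mean and covariance from labelled two-channel training pixels."""
    return {name: (np.mean(px, axis=0), np.cov(px, rowvar=False)) for name, px in classes.items()}

def classify(pixels, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    names = list(stats)
    scores = []
    for name in names:
        mu, cov = stats[name]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet))
    return np.array(names)[np.argmax(scores, axis=0)]

# Toy two-channel reflectance statistics for two hypothetical surface classes.
rng = np.random.default_rng(4)
train_px = {"ice": rng.normal([0.8, 0.3], 0.05, (200, 2)),
            "dry": rng.normal([0.5, 0.6], 0.05, (200, 2))}
stats = train(train_px)
test_px = rng.normal([0.8, 0.3], 0.05, (5, 2))
print(classify(test_px, stats))   # expected to label all five test pixels "ice"
```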

  6. A Combined Maximum-likelihood Analysis of the High-energy Astrophysical Neutrino Flux Measured with IceCube

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Classen, L.; Coenders, S.; Cowen, D. F.; Cruz Silva, A. H.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fahey, S.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grant, D.; Gretskov, P.; Groh, J. C.; Gross, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hellwig, D.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jero, K.; Jurkovic, M.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Kohnen, G.; Kolanoski, H.; Konietz, R.; Koob, A.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Middlemas, E.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Pütz, J.; Quinnan, M.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schimp, M.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Seckel, D.; Seunarine, S.; Shanidze, R.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Strahler, E. A.; Ström, R.; Strotjohann, N. 
L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Tosi, D.; Tselengidou, M.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Whitehorn, N.; Wichary, C.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2015-08-01

    Evidence for an extraterrestrial flux of high-energy neutrinos has now been found in multiple searches with the IceCube detector. The first solid evidence was provided by a search for neutrino events with deposited energies ≳ 30 TeV and interaction vertices inside the instrumented volume. Recent analyses suggest that the extraterrestrial flux extends to lower energies and is also visible with throughgoing, νμ-induced tracks from the Northern Hemisphere. Here, we combine the results from six different IceCube searches for astrophysical neutrinos in a maximum-likelihood analysis. The combined event sample features high-statistics samples of shower-like and track-like events. The data are fit in up to three observables: energy, zenith angle, and event topology. Assuming the astrophysical neutrino flux to be isotropic and to consist of equal flavors at Earth, the all-flavor spectrum with neutrino energies between 25 TeV and 2.8 PeV is well described by an unbroken power law with best-fit spectral index -2.50 ± 0.09 and a flux at 100 TeV of (6.7 +1.1/-1.2) × 10^-18 GeV^-1 s^-1 sr^-1 cm^-2. Under the same assumptions, an unbroken power law with index -2 is disfavored with a significance of 3.8σ (p = 0.0066%) with respect to the best fit. This significance is reduced to 2.1σ (p = 1.7%) if instead we compare the best fit to a spectrum with index -2 that has an exponential cut-off at high energies. Allowing the electron-neutrino flux to deviate from the other two flavors, we find a νe fraction of 0.18 ± 0.11 at Earth. The sole production of electron neutrinos, which would be characteristic of neutron-decay-dominated sources, is rejected with a significance of 3.6σ (p = 0.014%).
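
    A toy illustration of the core statistical step, an unbinned maximum-likelihood fit of a power-law spectral index to event energies over a fixed range. The energy range, sample size, and true index below are made up, and none of the detector response, flavor composition, or background modelling of the combined IceCube analysis is included.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy "event energies" drawn from dN/dE ~ E^(-gamma) between e_min and e_max (TeV),
# generated by inverse-transform sampling.
rng = np.random.default_rng(5)
gamma_true, e_min, e_max = 2.5, 25.0, 2800.0
u = rng.uniform(size=2000)
g1 = 1.0 - gamma_true
energies = (e_min**g1 + u * (e_max**g1 - e_min**g1)) ** (1.0 / g1)

def neg_log_lik(gamma):
    g1 = 1.0 - gamma
    norm = (e_max**g1 - e_min**g1) / g1          # integral of E^-gamma over [e_min, e_max]
    return -np.sum(-gamma * np.log(energies) - np.log(norm))

# The fitted index should be close to gamma_true.
print(minimize_scalar(neg_log_lik, bounds=(1.5, 3.5), method="bounded").x)
```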

  7. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models

    PubMed Central

    Ibrahim, Joseph G.

    2014-01-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR. PMID:25309677

  8. Maximum likelihood estimation of time to first event in the presence of data gaps and multiple events.

    PubMed

    Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W

    2016-04-01

    We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.

  9. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT

    NASA Astrophysics Data System (ADS)

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F.; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  10. Maximum-likelihood joint image reconstruction and motion estimation with misaligned attenuation in TOF-PET/CT.

    PubMed

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F; Thielemans, Kris

    2016-02-01

    This work is an extension of our recent work on joint activity reconstruction/motion estimation (JRM) from positron emission tomography (PET) data. We performed JRM by maximization of the penalized log-likelihood in which the probabilistic model assumes that the same motion field affects both the activity distribution and the attenuation map. Our previous results showed that JRM can successfully reconstruct the activity distribution when the attenuation map is misaligned with the PET data, but converges slowly due to the significant cross-talk in the likelihood. In this paper, we utilize time-of-flight PET for JRM and demonstrate that the convergence speed is significantly improved compared to JRM with conventional PET data.

  11. A robust channel-calibration algorithm for multi-channel in azimuth HRWS SAR imaging based on local maximum-likelihood weighted minimum entropy.

    PubMed

    Zhang, Shuang-Xi; Xing, Meng-Dao; Xia, Xiang-Gen; Liu, Yan-Yang; Guo, Rui; Bao, Zheng

    2013-12-01

    High-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) is an essential tool for modern remote sensing. To deal effectively with the conflict between high resolution and low pulse repetition frequency and to obtain an HRWS SAR image, a multi-channel in azimuth SAR system has been adopted in the literature. However, the performance of Doppler ambiguity suppression via digital beam forming processing suffers from channel mismatch. In this paper, a robust channel-calibration algorithm based on weighted minimum entropy is proposed for multi-channel in azimuth HRWS SAR imaging. The proposed algorithm is implemented as a two-step process. 1) The timing uncertainty in each channel and most of the range-invariant channel mismatches in amplitude and phase are corrected in a coarse-compensation pre-processing step. 2) After the pre-processing, only a residual range-dependent channel mismatch in phase remains; its retrieval is achieved by a local maximum-likelihood weighted minimum entropy algorithm. A simulated multi-channel in azimuth HRWS SAR data experiment is used to evaluate the performance of the proposed algorithm, and real measured airborne multi-channel in azimuth HRWS Scan-SAR data are used to demonstrate the effectiveness of the proposed approach. PMID:23893723

  12. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model the clutter signal as multiple sinusoids plus Gaussian noise and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of a Least Mean Square (LMS) based adaptive filter for the proposed signal model is investigated, and promising simulation results testify to its potential for clutter rejection, leading to more accurate estimation of wind speed and thus a better assessment of the windshear hazard.
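
    As a reminder of the building block involved: for a single complex sinusoid in white Gaussian noise, the ML frequency estimate is the location of the periodogram peak. The sketch below finds that peak on a toy signal and refines it with a bounded local search; the multi-sinusoid clutter model and the adaptive filtering studied in the dissertation are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy clutter-like signal: one complex sinusoid plus white Gaussian noise.
rng = np.random.default_rng(6)
n = 256
f_true = 0.123                               # cycles/sample
t = np.arange(n)
x = np.exp(2j * np.pi * f_true * t) + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def neg_periodogram(f):
    # For one sinusoid in white Gaussian noise, maximizing the periodogram is ML.
    return -np.abs(np.sum(x * np.exp(-2j * np.pi * f * t))) ** 2 / n

# Coarse FFT-grid maximum, then a bounded local refinement around it.
k0 = np.argmax(np.abs(np.fft.fft(x)) ** 2)
f0 = k0 / n
res = minimize_scalar(neg_periodogram, bounds=(f0 - 1.0 / n, f0 + 1.0 / n), method="bounded")
print(res.x)                                 # should be close to f_true
```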

  13. Beyond the fisher-matrix formalism: exact sampling distributions of the maximum-likelihood estimator in gravitational-wave parameter estimation.

    PubMed

    Vallisneri, Michele

    2011-11-01

    Gravitational-wave astronomers often wish to characterize the expected parameter-estimation accuracy of future observations. The Fisher matrix provides a lower bound on the spread of the maximum-likelihood estimator across noise realizations, as well as the leading-order width of the posterior probability, but it is limited to high signal strengths often not realized in practice. By contrast, Monte Carlo Bayesian inference provides the full posterior for any signal strength, but it is too expensive to repeat for a representative set of noises. Here I describe an efficient semianalytical technique to map the exact sampling distribution of the maximum-likelihood estimator across noise realizations, for any signal strength. This technique can be applied to any estimation problem for signals in additive Gaussian noise. PMID:22181593

  14. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package

    PubMed Central

    Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.

    2015-01-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated

  15. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  16. The Likelihood Function and Likelihood Statistics

    NASA Astrophysics Data System (ADS)

    Robinson, Edward L.

    2016-01-01

    The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse, most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
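
    A small numerical illustration of the point about non-Gaussian data: for counts modelled as y_i ~ Poisson(a * m_i), least squares and Poisson maximum likelihood give different estimators of a, and the Poisson MLE has the closed form sum(y)/sum(m). The model, numbers, and optimizer below are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Toy data: counts with Poisson noise around a model y_i = a * m_i (a = 2.0 here).
rng = np.random.default_rng(7)
m = np.linspace(1, 10, 20)
y = rng.poisson(2.0 * m)

# Least squares: minimize sum (y_i - a m_i)^2  ->  a = sum(m y) / sum(m^2)
a_ls = np.sum(m * y) / np.sum(m * m)

# Poisson maximum likelihood: maximize sum log Poisson(y_i; a m_i)
def neg_ll(a):
    return -np.sum(poisson.logpmf(y, a * m))

a_ml = minimize_scalar(neg_ll, bounds=(0.1, 10), method="bounded").x

# For this model the Poisson MLE has the closed form sum(y) / sum(m).
print(a_ls, a_ml, np.sum(y) / np.sum(m))
```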

  17. A Method and On-Line Tool for Maximum Likelihood Calibration of Immunoblots and Other Measurements That Are Quantified in Batches.

    PubMed

    Andrews, Steven S; Rutherford, Suzannah

    2016-01-01

    Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a "1-step calibration method" reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional "2-step" method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use.
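
    A rough flavour of the joint ("1-step") idea can be given with a deliberately simplified model in which each batch has a single multiplicative gain and the noise is lognormal, so the maximum-likelihood fit reduces to an alternating update in log space. This is an assumption made for the sketch, not the statistical model of the paper, and the data below are synthetic.

      import numpy as np

      # rows = batches (e.g. immunoblots), cols = samples; NaN = not measured in that batch
      rng = np.random.default_rng(2)
      gain_true = np.array([1.0, 2.5, 0.6])
      abun_true = np.array([1.0, 0.3, 4.0, 2.0])
      signal = gain_true[:, None] * abun_true[None, :] * rng.lognormal(0.0, 0.1, (3, 4))
      signal[0, 3] = np.nan

      logS = np.log(signal)
      g = np.zeros(3)                      # log batch gains
      x = np.nanmean(logS, axis=0)         # log abundances, initial guess

      # All gains and abundances are refined jointly from all measurements,
      # rather than calibrating each batch from its own standards alone.
      for _ in range(100):
          g = np.nanmean(logS - x[None, :], axis=1)
          x = np.nanmean(logS - g[:, None], axis=0)

      # Without absolute standards only relative levels are identifiable.
      print("abundances relative to sample 0:", np.exp(x - x[0]).round(2))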

  18. A maximum likelihood estimator for bedrock fracture transmissivities and its application to the analysis and design of borehole hydraulic tests

    NASA Astrophysics Data System (ADS)

    West, Anthony C. F.; Novakowski, Kent S.; Gazor, Saeed

    2006-06-01

    We propose a new method to estimate the transmissivities of bedrock fractures from transmissivities measured in intervals of fixed length along a borehole. We define the scale of a fracture set by the inverse of the density of the Poisson point process assumed to represent their locations along the borehole wall, and we assume a lognormal distribution for their transmissivities. The parameters of the latter distribution are estimated by maximizing the likelihood of a left-censored subset of the data where the degree of censorship depends on the scale of the considered fracture set. We applied the method to sets of interval transmissivities simulated by summing random fracture transmissivities drawn from a specified population. We found the estimated distributions compared well to the transmissivity distributions of similarly scaled subsets of the most transmissive fractures from among the specified population. Estimation accuracy was most sensitive to the variance in the transmissivities of the fracture population. Using the proposed method, we estimated the transmissivities of fractures at increasing scale from hydraulic test data collected at a fixed scale in Smithville, Ontario, Canada. This is an important advancement since the resultant curves of transmissivity parameters versus fracture set scale would only previously have been obtainable from hydraulic tests conducted with increasing test interval length and with degrading equipment precision. Finally, on the basis of the properties of the proposed method, we propose guidelines for the design of fixed interval length hydraulic testing programs that require minimal prior knowledge of the rock.
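
    The core computation is a left-censored lognormal maximum-likelihood fit: detected intervals contribute the density and censored intervals contribute the cumulative probability below the threshold. The sketch below shows only this generic censored likelihood with invented numbers; it does not reproduce the scale-dependent censoring used in the paper.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      # Synthetic log10-transmissivities; values at or below `limit` are left-censored
      # (we only know they did not exceed the measurement threshold).
      rng = np.random.default_rng(3)
      logT = rng.normal(-8.0, 1.5, 200)
      limit = -9.0
      detected = logT[logT > limit]
      n_cens = int(np.sum(logT <= limit))

      def neg_log_like(p):
          mu, sigma = p
          if sigma <= 0:
              return np.inf
          ll = norm.logpdf(detected, mu, sigma).sum()        # uncensored intervals
          ll += n_cens * norm.logcdf(limit, mu, sigma)        # censored intervals
          return -ll

      fit = minimize(neg_log_like, x0=(-8.5, 1.0), method="Nelder-Mead")
      print("mu, sigma of log-transmissivity:", fit.x.round(3))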

  19. Phylogenetic systematics and biogeography of hummingbirds: Bayesian and maximum likelihood analyses of partitioned data and selection of an appropriate partitioning strategy.

    PubMed

    McGuire, Jimmy A; Witt, Christopher C; Altshuler, Douglas L; Remsen, J V

    2007-10-01

    Hummingbirds are an important model system in avian biology, but to date the group has been the subject of remarkably few phylogenetic investigations. Here we present partitioned Bayesian and maximum likelihood phylogenetic analyses for 151 of approximately 330 species of hummingbirds and 12 outgroup taxa based on two protein-coding mitochondrial genes (ND2 and ND4), flanking tRNAs, and two nuclear introns (AK1 and BFib). We analyzed these data under several partitioning strategies ranging between unpartitioned and a maximum of nine partitions. In order to select a statistically justified partitioning strategy following partitioned Bayesian analysis, we considered four alternative criteria including Bayes factors, modified versions of the Akaike information criterion for small sample sizes (AIC(c)), Bayesian information criterion (BIC), and a decision-theoretic methodology (DT). Following partitioned maximum likelihood analyses, we selected a best-fitting strategy using hierarchical likelihood ratio tests (hLRTs), the conventional AICc, BIC, and DT, concluding that the most stringent criterion, the performance-based DT, was the most appropriate methodology for selecting amongst partitioning strategies. In the context of our well-resolved and well-supported phylogenetic estimate, we consider the historical biogeography of hummingbirds using ancestral state reconstructions of (1) primary geographic region of occurrence (i.e., South America, Central America, North America, Greater Antilles, Lesser Antilles), (2) Andean or non-Andean geographic distribution, and (3) minimum elevational occurrence. These analyses indicate that the basal hummingbird assemblages originated in the lowlands of South America, that most of the principal clades of hummingbirds (all but Mountain Gems and possibly Bees) originated on this continent, and that there have been many (at least 30) independent invasions of other primary landmasses, especially Central America.

  20. Shading Beats Binocular Disparity in Depth from Luminance Gradients: Evidence against a Maximum Likelihood Principle for Cue Combination.

    PubMed

    Chen, Chien-Chung; Tyler, Christopher William

    2015-01-01

    Perceived depth is conveyed by multiple cues, including binocular disparity and luminance shading. Depth perception from luminance shading information depends on the perceptual assumption for the incident light, which has been shown to default to a diffuse illumination assumption. We focus on the case of sinusoidally corrugated surfaces to ask how shading and disparity cues combine when they are defined by the joint luminance gradients and intrinsic disparity modulation that would occur in viewing the physical corrugation of a uniform surface under diffuse illumination. Such surfaces were simulated with a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in-phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0' to 20'. The observers' task was to adjust the binocular disparity of a comparison random-dot stereogram surface to match the perceived depth of the joint luminance/disparity-modulated corrugation target. Regardless of target spatial frequency, the perceived target depth increased with the luminance contrast and depended on luminance phase but was largely unaffected by the luminance disparity modulation. These results validate the idea that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone without making an assumption of light direction. For depth judgments with combined cues, the observers gave much greater weighting to the luminance shading than to the disparity modulation of the targets. The results were not well-fit by a Bayesian cue-combination model weighted in proportion to the variance of the measurements for each cue in isolation. Instead, they suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
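
    For reference, the maximum-likelihood (reliability-weighted) combination rule that this study tests against is simple inverse-variance weighting. The depth values and variances below are made up to show what the rule predicts; they are not data from the paper.

      import numpy as np

      def ml_combine(d_shading, var_shading, d_disparity, var_disparity):
          """Reliability-weighted (maximum-likelihood) cue combination."""
          w_s, w_d = 1.0 / var_shading, 1.0 / var_disparity
          d_hat = (w_s * d_shading + w_d * d_disparity) / (w_s + w_d)
          return d_hat, 1.0 / (w_s + w_d)

      # If shading alone suggests 10 arcmin of depth and disparity alone 2 arcmin,
      # and disparity is the more reliable cue, the ML prediction sits near 2 arcmin;
      # the shading-dominated matches reported above depart from this prediction.
      print(ml_combine(10.0, 4.0, 2.0, 1.0))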

  1. Estimating epidemiological parameters for bovine tuberculosis in British cattle using a Bayesian partial-likelihood approach.

    PubMed

    O'Hare, A; Orton, R J; Bessell, P R; Kao, R R

    2014-05-22

    Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems.

  2. Estimating epidemiological parameters for bovine tuberculosis in British cattle using a Bayesian partial-likelihood approach

    PubMed Central

    O'Hare, A.; Orton, R. J.; Bessell, P. R.; Kao, R. R.

    2014-01-01

    Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems. PMID:24718762

  3. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the characteristic frequencies of bearing faults. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data generated with a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes. PMID:27038887

  4. Orders of magnitude extension of the effective dynamic range of TDC-based TOFMS data through maximum likelihood estimation.

    PubMed

    Ipsen, Andreas; Ebbels, Timothy M D

    2014-10-01

    In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.
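
    One textbook ingredient of TDC saturation correction, shown here as a sketch rather than the authors' full procedure, is the single-bin dead-time relation: if a detector registers at most one ion per extraction in a given time bin, the maximum-likelihood estimate of the true mean rate has a closed form. The counts below are invented.

      import numpy as np

      # Observed hits k out of N extractions follow Binomial(N, 1 - exp(-lam)) when at
      # most one ion per extraction can be recorded; the ML estimate of lam follows.
      def ml_ion_rate(k, N):
          p_hat = k / N
          return np.inf if p_hat >= 1.0 else -np.log(1.0 - p_hat)

      N = 10_000
      for k in (100, 5_000, 9_900):
          print(f"{k} hits -> {ml_ion_rate(k, N):.3f} ions/extraction on average")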

  5. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This especially concerns spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study, microtraces collected from hit-and-run accident scenarios were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. PMID:27282749
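
    A heavily simplified, score-based version of the likelihood ratio idea is sketched below: a comparison score between recovered and control spectra is computed after dimension reduction, and its distributions under the same-source and different-source propositions are learned from training comparisons. The scores are purely synthetic and this is not the feature-based model of the paper.

      import numpy as np
      from scipy.stats import norm

      # Comparison scores (e.g. distances between spectra after dimension reduction)
      # for known same-source and different-source pairs, here purely synthetic.
      rng = np.random.default_rng(4)
      same_scores = rng.normal(0.2, 0.1, 500)
      diff_scores = rng.normal(0.8, 0.2, 500)

      f_same = norm(same_scores.mean(), same_scores.std(ddof=1))
      f_diff = norm(diff_scores.mean(), diff_scores.std(ddof=1))

      def likelihood_ratio(score):
          return f_same.pdf(score) / f_diff.pdf(score)

      print(likelihood_ratio(0.25))   # > 1 supports the same-source proposition
      print(likelihood_ratio(0.90))   # < 1 supports the different-source proposition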

  6. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    SciTech Connect

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and yield the highest F-score for the fine-grained English All-Words subtask.
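
    A maximum entropy classifier is, in essence, multinomial logistic regression over the feature set. The toy below uses only bag-of-context-words features and invented sentences, far simpler than the feature set of the actual system, just to show the shape of the approach.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      contexts = [
          "deposit money at the bank before noon",
          "the bank raised its interest rate",
          "we fished from the bank of the river",
          "grass grew along the river bank",
      ]
      senses = ["finance", "finance", "river", "river"]

      # Multinomial logistic regression is the standard maximum entropy classifier.
      clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
      clf.fit(contexts, senses)
      print(clf.predict(["she opened an account at the bank",
                         "they moored the boat by the bank of the stream"]))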

  7. Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach

    NASA Astrophysics Data System (ADS)

    Hehn, L.; Armengaud, E.; Arnaud, Q.; Augier, C.; Benoît, A.; Bergé, L.; Billard, J.; Blümer, J.; de Boissière, T.; Broniatowski, A.; Camus, P.; Cazes, A.; Chapellier, M.; Charlieux, F.; De Jésus, M.; Dumoulin, L.; Eitel, K.; Foerster, N.; Gascon, J.; Giuliani, A.; Gros, M.; Heuermann, G.; Jin, Y.; Juillard, A.; Kéfélian, C.; Kleifges, M.; Kozlov, V.; Kraus, H.; Kudryavtsev, V. A.; Le-Sueur, H.; Marnieros, S.; Navick, X.-F.; Nones, C.; Olivieri, E.; Pari, P.; Paul, B.; Piro, M.-C.; Poda, D.; Queguiner, E.; Rozov, S.; Sanglard, V.; Schmidt, B.; Scorza, S.; Siebenborn, B.; Tcherniakhovski, D.; Vagneron, L.; Weber, M.; Yakushev, E.

    2016-10-01

    We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m_χ in [4, 30] GeV/c^2 with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from 206Pb and neutrons as well as detector intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistics. The 90 % C.L. exclusion limit set for WIMPs with m_χ = 4 GeV/c^2 is 1.6 × 10^-39 cm^2, which is an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c^2 the exclusion limits found with both analyses are in good agreement.
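
    The machinery of a profile-likelihood upper limit can be illustrated with a toy single-bin counting experiment: one nuisance background constrained by a sideband, the background profiled out for each signal strength, and the limit read off where the test statistic crosses the asymptotic 90% threshold. All counts and scale factors below are invented and bear no relation to the EDELWEISS likelihood.

      import numpy as np
      from scipy.optimize import minimize_scalar, brentq
      from scipy.stats import poisson

      # n events observed with expectation mu*s + b; a sideband with m = tau*b
      # expected counts constrains the nuisance background b.
      n, m, s, tau = 4, 20, 10.0, 5.0

      def nll(mu, b):
          return -(poisson.logpmf(n, mu * s + b) + poisson.logpmf(m, tau * b))

      def profiled_nll(mu):        # minimize over the nuisance parameter b
          return minimize_scalar(lambda b: nll(mu, b),
                                 bounds=(1e-6, 50.0), method="bounded").fun

      best = minimize_scalar(profiled_nll, bounds=(0.0, 5.0), method="bounded")

      def q(mu):                   # profile likelihood ratio test statistic
          return 2.0 * (profiled_nll(mu) - best.fun)

      # 90% C.L. one-sided upper limit from the asymptotic threshold q = 2.71
      mu_up = brentq(lambda mu: q(mu) - 2.71, best.x + 1e-6, 5.0)
      print("90% C.L. upper limit on the signal strength:", round(mu_up, 3))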

  8. User's guide: Nimbus-7 Earth radiation budget narrow-field-of-view products. Scene radiance tape products, sorting into angular bins products, and maximum likelihood cloud estimation products

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard

    1990-01-01

    The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.

  9. Bayesian clustering of fuzzy feature vectors using a quasi-likelihood approach.

    PubMed

    Marttinen, Pekka; Tang, Jing; De Baets, Bernard; Dawyndt, Peter; Corander, Jukka

    2009-01-01

    Bayesian model-based classifiers, both unsupervised and supervised, have been studied extensively and their value and versatility have been demonstrated on a wide spectrum of applications within science and engineering. A majority of the classifiers are built on the assumption of intrinsic discreteness of the considered data features or on the discretization of them prior to the modeling. On the other hand, Gaussian mixture classifiers have also been utilized to a large extent for continuous features in the Bayesian framework. Often the primary reason for discretization in the classification context is the simplification of the analytical and numerical properties of the models. However, the discretization can be problematic due to its ad hoc nature and the decreased statistical power to detect the correct classes in the resulting procedure. We introduce an unsupervised classification approach for fuzzy feature vectors that utilizes a discrete model structure while preserving the continuous characteristics of data. This is achieved by replacing the ordinary likelihood by a binomial quasi-likelihood to yield an analytical expression for the posterior probability of a given clustering solution. The resulting model can be justified from an information-theoretic perspective. Our method is shown to yield highly accurate clusterings for challenging synthetic and empirical data sets. PMID:19029547

  10. Triadic conceptual structure of the maximum entropy approach to evolution.

    PubMed

    Herrmann-Pillath, Carsten; Salthe, Stanley N

    2011-03-01

    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this on Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law. PMID:21055440

  11. Triadic conceptual structure of the maximum entropy approach to evolution.

    PubMed

    Herrmann-Pillath, Carsten; Salthe, Stanley N

    2011-03-01

    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this on Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.

  12. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing
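
    When the baseline nursery signatures are treated as known, the maximum-likelihood mixing proportions can be obtained with a short EM iteration, as in the conditional case described above. The sketch below uses invented Gaussian baselines and proportions; it is not the paper's dataset or its full unconditional model.

      import numpy as np
      from scipy.stats import multivariate_normal

      # Baseline (nursery) signatures treated as known Gaussians; means, covariances
      # and true proportions below are invented for the sketch.
      rng = np.random.default_rng(5)
      means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
      covs = [np.eye(2) * 0.5] * 3
      labels = rng.choice(3, size=500, p=[0.6, 0.3, 0.1])
      X = np.array([rng.multivariate_normal(means[k], covs[k]) for k in labels])

      dens = np.column_stack([multivariate_normal(means[k], covs[k]).pdf(X)
                              for k in range(3)])
      pi = np.full(3, 1.0 / 3.0)
      for _ in range(200):
          resp = pi * dens
          resp /= resp.sum(axis=1, keepdims=True)   # E-step: source membership probabilities
          pi = resp.mean(axis=0)                    # M-step: update mixing proportions
      print("estimated nursery contributions:", pi.round(3))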

  13. Approach to forecasting daily maximum ozone levels in St. Louis

    NASA Technical Reports Server (NTRS)

    Prior, E. J.; Schiess, J. R.; Mcdougal, D. S.

    1981-01-01

    Measurements taken in 1976 from the St. Louis Regional Air Pollution Study (RAPS) data base, conducted by EPA, were analyzed to determine an optimum set of air-quality and meteorological variables for predicting maximum ozone levels for each day in 1976. A 'leaps and bounds' regression analysis was used to identify the best subset of variables. Three particular variables, the 9 a.m. ozone level, the forecasted maximum temperature, and the 6-9 a.m. averaged wind speed, have useful forecasting utility. The trajectory history of air masses entering St. Louis was studied, and it was concluded that transport-related variables contribute to the appearance of very high ozone levels. The final empirical forecast model predicts the daily maximum ozone over 341 days with a standard deviation of 11 ppb, which approaches the estimated error.

  14. A Penalized Likelihood Approach for Investigating Gene-Drug Interactions in Pharmacogenetic Studies

    PubMed Central

    Neely, Megan L.; Bondell, Howard D.; Tzeng, Jung-Ying

    2015-01-01

    Summary Pharmacogenetics investigates the relationship between heritable genetic variation and the variation in how individuals respond to drug therapies. Often, gene-drug interactions play a primary role in this response, and identifying these effects can aid in the development of individualized treatment regimes. Haplotypes can hold key information in understanding the association between genetic variation and drug response. However, the standard approach for haplotype-based association analysis does not directly address the research questions dictated by individualized medicine. A complementary post-hoc analysis is required, and this post-hoc analysis is usually underpowered after adjusting for multiple comparisons and may lead to seemingly contradictory conclusions. In this work, we propose a penalized likelihood approach that is able to overcome the drawbacks of the standard approach and yield the desired personalized output. We demonstrate the utility of our method by applying it to the Scottish Randomized Trial in Ovarian Cancer. We also conducted simulation studies and showed that the proposed penalized method has power comparable to or greater than that of the standard approach and maintains low Type I error rates for both binary and quantitative drug responses. The largest performance gains are seen when the haplotype frequency is low, the differences in effect sizes are small, or the true relationship among the drugs is more complex. PMID:25604216

  15. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
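
    Under Gaussian noise, constrained maximum-likelihood estimation of the mixture weights reduces to least squares over the probability simplex (weights non-negative and summing to one), which a general-purpose constrained optimizer can handle. The basis profiles below are toy Guinier-like curves, not real SAXS intensities, and the weights are invented.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      q = np.linspace(0.01, 0.5, 200)
      # Toy "conformation" intensity profiles (Guinier-like curves, not real SAXS data)
      basis = np.column_stack([np.exp(-(q * r)**2 / 3.0) for r in (15.0, 25.0, 40.0)])
      y = basis @ np.array([0.5, 0.3, 0.2]) + 0.002 * rng.standard_normal(q.size)

      # Gaussian noise: the constrained MLE is least squares over the probability simplex.
      cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
      res = minimize(lambda w: np.sum((y - basis @ w)**2), x0=np.full(3, 1 / 3),
                     bounds=[(0.0, 1.0)] * 3, constraints=cons, method="SLSQP")
      print("relative abundances:", res.x.round(3))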

  16. Maximum-Likelihood Joint Image Reconstruction/Motion Estimation in Attenuation-Corrected Respiratory Gated PET/CT Using a Single Attenuation Map.

    PubMed

    Bousse, Alexandre; Bertolli, Ottavia; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F; Thielemans, Kris

    2016-01-01

    This work provides an insight into positron emission tomography (PET) joint image reconstruction/motion estimation (JRM) by maximization of the likelihood, where the probabilistic model accounts for warped attenuation. Our analysis shows that maximum-likelihood (ML) JRM returns the same reconstructed gates for any attenuation map (μ-map) that is a deformation of a given μ-map, regardless of its alignment with the PET gates. We derived a joint optimization algorithm accordingly, and applied it to simulated and patient gated PET data. We first evaluated the proposed algorithm on simulations of respiratory gated PET/CT data based on the XCAT phantom. Our results show that independently of which μ-map is used as input to JRM: (i) the warped μ-maps correspond to the gated μ-maps, (ii) JRM outperforms the traditional post-registration reconstruction and consolidation (PRRC) for hot lesion quantification and (iii) reconstructed gated PET images are similar to those obtained with gated μ-maps. This suggests that a breath-held μ-map can be used. We then applied JRM on patient data with a μ-map derived from a breath-held high resolution CT (HRCT), and compared the results with PRRC, where each reconstructed PET image was obtained with a corresponding cine-CT gated μ-map. Results show that JRM with breath-held HRCT achieves similar reconstruction to that using PRRC with cine-CT. This suggests a practical low-dose solution for implementation of motion-corrected respiratory gated PET/CT.

  17. Assessing the Heritability of Anorexia Nervosa Symptoms Using a Marginal Maximal Likelihood Approach

    PubMed Central

    Mazzeo, Suzanne E.; Mitchell, Karen S.; Bulik, Cynthia M.; Reichborn-Kjennerud, Ted; Kendler, Kenneth S.; Neale, Michael C.

    2008-01-01

    Background Assessment of eating disorders at the symptom level can facilitate the refinement of phenotypes. We examined genetic and environmental contributions to liability to anorexia nervosa (AN) symptoms in a population-based twin sample using a genetic common pathway model. Method Participants were from the Norwegian Institute of Public Health Twin Panel and included all female monozygotic (n = 448 complete pairs and 4 singletons) and dizygotic (n = 263 complete pairs and 4 singletons) twins who completed the Composite International Diagnostic Interview assessing DSM-IV axis I and ICD-10 criteria. Responses to items assessing AN symptoms were included in a model fitted using marginal maximum likelihood. Results Heritability of the overall AN diagnosis was moderate (a2 = .22, 95% CI: 0.0; .50), whereas heritabilities of the specific items varied. Heritability estimates for weight loss items were moderate (a2 estimates ranged from .31 to .34) and items assessing weight concern when at a low weight were smaller (ranging from .18 to .29). Additive genetic factors contributed little to the variance of amenorrhea, which was most strongly influenced by unshared environment (a2 =.16; e2 = .71). Conclusions AN symptoms are differentially heritable. Specific criteria such as those related to body weight and weight loss history represent more biologically driven potential endophenotypes or liability indices. Results regarding weight concern differ somewhat from those of previous studies, which highlights the importance of assessing genetic and environmental influences on variance of traits within specific subgroups of interest. PMID:18485259

  18. A Likelihood-Based Approach to Identifying Contaminated Food Products Using Sales Data: Performance and Challenges

    PubMed Central

    Kaufman, James; Lessler, Justin; Harry, April; Edlund, Stefan; Hu, Kun; Douglas, Judith; Thoens, Christian; Appel, Bernd; Käsbohrer, Annemarie; Filter, Matthias

    2014-01-01

    Foodborne disease outbreaks of recent years demonstrate that due to increasingly interconnected supply chains these types of crisis situations have the potential to affect thousands of people, leading to significant healthcare costs, loss of revenue for food companies, and—in the worst cases—death. When a disease outbreak is detected, identifying the contaminated food quickly is vital to minimize suffering and limit economic losses. Here we present a likelihood-based approach that has the potential to accelerate the time needed to identify possibly contaminated food products, which is based on the exploitation of food product sales data and the distribution of foodborne illness case reports. Using a real world food sales data set and artificially generated outbreak scenarios, we show that this method performs very well for contamination scenarios originating from a single “guilty” food product. As it is neither always possible nor necessary to identify the single offending product, the method has been extended such that it can be used as a binary classifier. With this extension it is possible to generate a set of potentially “guilty” products that contains the real outbreak source with very high accuracy. Furthermore we explore the patterns of food distributions that lead to “hard-to-identify” foods, the possibility of identifying these food groups a priori, and the extent to which the likelihood-based method can be used to quantify uncertainty. We find that high spatial correlation of sales data between products may be a useful indicator for “hard-to-identify” products. PMID:24992565

  19. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
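
    The mechanics of summing per-study LogLR functions can be sketched on a grid, with each study's log-likelihood approximated as a quadratic in the effect size. The effect estimates and standard errors below are invented, and the support cut-off is an illustrative choice rather than the intrinsic interval defined in the paper.

      import numpy as np

      # Per-study effect estimates (e.g. log relative risks) and standard errors.
      theta_hat = np.array([-0.22, -0.10, -0.35])
      se = np.array([0.10, 0.08, 0.15])

      grid = np.linspace(-1.0, 0.5, 3001)
      # Quadratic (normal) approximation to each study's log-likelihood, then summed.
      total = (-0.5 * ((grid[:, None] - theta_hat) / se) ** 2).sum(axis=1)

      mle = grid[np.argmax(total)]
      support = grid[total >= total.max() - 2.0]   # points within a 1/e^2 likelihood ratio
      print("pooled estimate:", round(mle, 3))
      print("support interval:", round(support[0], 3), "to", round(support[-1], 3))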

  20. Simultaneous Maximum-Likelihood Reconstruction of Absorption Coefficient, Refractive Index and Dark-Field Scattering Coefficient in X-Ray Talbot-Lau Tomography

    PubMed Central

    Ritter, André; Anton, Gisela; Weber, Thomas

    2016-01-01

    A maximum-likelihood reconstruction technique for X-ray Talbot-Lau tomography is presented. This technique allows the iterative simultaneous reconstruction of discrete distributions of absorption coefficient, refractive index and a dark-field scattering coefficient. This technique avoids prior phase retrieval in the tomographic projection images and thus in principle allows reconstruction from tomographic data with less than three phase steps per projection. A numerical phantom is defined which is used to evaluate convergence of the technique with regard to photon statistics and with regard to the number of projection angles and phase steps used. It is shown that the use of a random phase sampling pattern allows the reconstruction even for the extreme case of only one single phase step per projection. The technique is successfully applied to measured tomographic data of a mouse. In future, this reconstruction technique might also be used to implement enhanced imaging models for X-ray Talbot-Lau tomography. These enhancements might be suited to correct for example beam hardening and dispersion artifacts and improve overall image quality of X-ray Talbot-Lau tomography. PMID:27695126

  1. A Study on Along-Track and Cross-Track Noise of Altimetry Data by Maximum Likelihood: Mars Orbiter Laser Altimetry (Mola) Example

    NASA Astrophysics Data System (ADS)

    Jarmołowski, Wojciech; Łukasiak, Jacek

    2015-12-01

    The work investigates the spatial correlation of the data collected along orbital tracks of Mars Orbiter Laser Altimeter (MOLA) with a special focus on the noise variance problem in the covariance matrix. The problem of different correlation parameters in along-track and cross-track directions of orbital or profile data is still under discussion in relation to Least Squares Collocation (LSC). Different spacing in along-track and transverse directions and the anisotropy problem are frequently considered in the context of this kind of data. Therefore the problem is analyzed in this work, using MOLA data samples. The analysis in this paper is focused on a priori errors that correspond to the white noise present in the data and is performed by maximum likelihood (ML) estimation in two perpendicular directions. Additionally, correlation lengths of the assumed planar covariance model are determined by ML and by fitting it to the empirical covariance function (ECF). All estimates considered together confirm substantial influence of different data resolution in along-track and transverse directions on the covariance parameters.
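
    The ML step for one direction can be sketched as follows: build a covariance matrix from a signal variance, a correlation length and a white-noise variance, and minimize the Gaussian negative log-likelihood of the residuals. The exponential covariance model, track geometry and parameter values below are assumptions for illustration, not the model or data of the study.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.linalg import cho_factor, cho_solve

      rng = np.random.default_rng(7)
      x = np.linspace(0.0, 100.0, 150)               # along-track distance, km
      d = np.abs(x[:, None] - x[None, :])
      C_true = 4.0 * np.exp(-d / 10.0) + 0.25 * np.eye(x.size)
      z = np.linalg.cholesky(C_true) @ rng.standard_normal(x.size)

      def neg_log_like(p):
          s2, L, n2 = np.exp(p)                      # log-parameters keep everything positive
          C = s2 * np.exp(-d / L) + n2 * np.eye(x.size)
          try:
              cf = cho_factor(C)
          except np.linalg.LinAlgError:
              return np.inf
          logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
          return 0.5 * (logdet + z @ cho_solve(cf, z))

      fit = minimize(neg_log_like, x0=np.log([1.0, 5.0, 0.1]), method="Nelder-Mead")
      print("signal var, corr. length, noise var:", np.exp(fit.x).round(3))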

  2. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, also their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, also parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  3. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    PubMed Central

    2010-01-01

    Background The development, in the last decade, of stochastic heuristics implemented in robust software applications has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software package will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2

  4. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas.

    PubMed

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-08-01

    One objective of the present study was to compare the performance of the support vector machine (SVM)-based image classification technique with the maximum likelihood classification (MLC) technique for a rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 of a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy when compared to the MLC technique in classifying a heterogeneous landscape with a limited training dataset. SVM exceeded MLC in handling the difficult challenge of classifying features with near-similar reflectance on the mean signature plot: an improvement of over 11 % was observed in classification of built-up area and an improvement of 24 % in classification of surface water using SVM; similarly, the SVM technique improved the overall land use classification accuracy by almost 6 and 3 % for the Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment. PMID:27461425
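
    The comparison in this record can be mimicked on synthetic "pixels": the remote-sensing maximum likelihood classifier fits one Gaussian per class (quadratic discriminant analysis is the equivalent estimator), while the SVM learns a kernel decision boundary. The data generator, class counts and hyperparameters below are illustrative assumptions, not the Landsat workflow of the study.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      # Synthetic multispectral "pixels" with four land-cover classes.
      X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

      mlc = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)       # per-class Gaussian ML classifier
      svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
      print("MLC overall accuracy:", round(mlc.score(Xte, yte), 3))
      print("SVM overall accuracy:", round(svm.score(Xte, yte), 3))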

  5. Wavelet filtering of time-series moderate resolution imaging spectroradiometer data for rice crop mapping using support vector machines and maximum likelihood classifier

    NASA Astrophysics Data System (ADS)

    Chen, Chi-Farn; Son, Nguyen-Thanh; Chen, Cheng-Ru; Chang, Ly-Yu

    2011-01-01

    Rice is the most important economic crop in Vietnam's Mekong Delta (MD). It is the main source of employment and income for rural people in this region. Yearly estimates of rice growing areas and delineation of spatial distribution of rice crops are needed to devise agricultural economic plans and to ensure security of the food supply. The main objective of this study is to map rice cropping systems with respect to monitoring agricultural practices in the MD using time-series moderate resolution imaging spectroradiometer (MODIS) normalized difference vegetation index (NDVI) 250-m data. These time-series NDVI data were derived from the 8-day MODIS 250-m data acquired in 2008. Various spatial and nonspatial data were also used for accuracy verification. The method used in this study consists of the following three main steps: 1. filtering noise from the time-series NDVI data using wavelet transformation (Coiflet 4); 2. classification of rice cropping systems using parametric and nonparametric classification algorithms: the maximum likelihood classifier (MLC) and support vector machines (SVMs); and 3. verification of classification results using ground truth data and government rice statistics. Wavelet transformation proved effective for cleaning the rice signatures. The results of classification accuracy assessment showed that the SVMs outperformed the MLC. The overall accuracy and Kappa coefficient achieved by the SVMs were 89.7% and 0.86, respectively, while those achieved by the MLC were 76.2% and 0.68, respectively. Comparison of the MODIS-derived areas obtained by the SVMs with the government rice statistics at the provincial level also demonstrated that the results achieved by the SVMs (R2 = 0.95) were better than those of the MLC (R2 = 0.91). This study demonstrates the effectiveness of using a nonparametric classification algorithm (SVMs) and time-series MODIS NDVI data for rice crop mapping in the Vietnamese MD.

  6. Accuracy of land use change detection using support vector machine and maximum likelihood techniques for open-cast coal mining areas.

    PubMed

    Karan, Shivesh Kishore; Samadder, Sukha Ranjan

    2016-08-01

    One objective of the present study was to compare the performance of the support vector machine (SVM)-based image classification technique with the maximum likelihood classification (MLC) technique for a rapidly changing landscape of an open-cast mine. The other objective was to assess the change in land use pattern due to coal mining from 2006 to 2016. Assessing the change in land use pattern accurately is important for the development and monitoring of coalfields in conjunction with sustainable development. For the present study, Landsat 5 Thematic Mapper (TM) data of 2006 and Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) data of 2016 of a part of Jharia Coalfield, Dhanbad, India, were used. The SVM classification technique provided greater overall classification accuracy when compared to the MLC technique in classifying a heterogeneous landscape with a limited training dataset. SVM exceeded MLC in handling the difficult challenge of classifying features with near-similar reflectance on the mean signature plot: an improvement of over 11 % was observed in classification of built-up area and an improvement of 24 % in classification of surface water using SVM; similarly, the SVM technique improved the overall land use classification accuracy by almost 6 and 3 % for the Landsat 5 and Landsat 8 images, respectively. Results indicated that land degradation increased significantly from 2006 to 2016 in the study area. This study will help in quantifying the changes and can also serve as a basis for further decision support system studies aiding a variety of purposes such as planning and management of mines and environmental impact assessment.

  7. Maximum Likelihood Factor Analysis of the Effects of Chronic Centrifugation on the Structural Development of the Musculoskeletal System of the Rat

    NASA Technical Reports Server (NTRS)

    Amtmann, E.; Kimura, T.; Oyama, J.; Doden, E.; Potulski, M.

    1979-01-01

    At the age of 30 days, female Sprague-Dawley rats were placed on a 3.66 m radius centrifuge and subsequently exposed almost continuously for 810 days to either 2.76 or 4.15 G. An age-matched control group of rats was raised near the centrifuge facility at earth gravity. Three further control groups of rats were obtained from the animal colony and sacrificed at the age of 34, 72 and 102 days. A total of 16 variables were simultaneously factor analyzed by a maximum-likelihood extraction routine and the factor loadings were presented after rotation to simple structure by a varimax rotation routine. The variables include the G-load, age, body mass, femoral length and cross-sectional area, inner and outer radii, density and strength at the mid-length of the femur, dry weight of gluteus medius, semimembranosus and triceps surae muscles. Factor analyses on A) all controls, B) all controls and the 2.76 G group, and C) all controls and centrifuged animals, produced highly similar loading structures of three common factors which accounted for 74%, 68% and 68%, respectively, of the total variance. The 3 factors were interpreted as: 1. An age and size factor which stimulates the growth in length and diameter and increases the density and strength of the femur. This factor is positively correlated with G-load but is also active in the control animals living at earth gravity. 2. A growth inhibition factor which acts on body size, femoral length and on both the outer and inner radius at mid-length of the femur. This factor is intensified by centrifugation.
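
    The analysis pipeline of this record, maximum-likelihood factor extraction followed by varimax rotation, can be reproduced in outline on synthetic data. The latent-factor structure, loadings and noise level below are invented stand-ins for the 16 musculoskeletal variables.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      # Synthetic stand-in for the 16 variables: three latent factors plus noise.
      rng = np.random.default_rng(8)
      scores = rng.standard_normal((200, 3))
      loadings_true = rng.uniform(-1.0, 1.0, (16, 3))
      X = scores @ loadings_true.T + 0.3 * rng.standard_normal((200, 16))

      # Maximum-likelihood factor extraction followed by varimax rotation.
      fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
      communality = (fa.components_ ** 2).sum(axis=0)
      share = communality.sum() / (communality.sum() + fa.noise_variance_.sum())
      print("rotated loadings shape:", fa.components_.shape)
      print("share of total variance accounted for:", round(share, 2))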

  8. Maximum likelihood estimate of life expectancy in the prehistoric Jomon: Canine pulp volume reduction suggests a longer life expectancy than previously thought.

    PubMed

    Sasaki, Tomohiko; Kondo, Osamu

    2016-09-01

    Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years.

  9. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters beside the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters so that, via Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.
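
    As a concrete illustration of the maximum-likelihood side of this comparison, the sketch below fits a rate plus two noise amplitudes to a synthetic position series. It is only a minimal stand-in for the published analyses: a random-walk covariance replaces the general power-law process, and all variable names and values are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 200
      t = np.arange(n) / 52.0                      # weekly samples, in years

      def covariance(sigma_w, sigma_pl):
          # white noise plus a random-walk term standing in for power-law noise
          idx = np.arange(1, n + 1)
          return sigma_w**2 * np.eye(n) + sigma_pl**2 * np.minimum.outer(idx, idx) / 52.0

      # synthetic data: 3 mm/yr rate, 2 mm white noise, 1 mm/sqrt(yr) random walk
      y = 3.0 * t + rng.multivariate_normal(np.zeros(n), covariance(2.0, 1.0))

      def neg_log_like(p):
          rate, s_w, s_pl = p
          r = y - rate * t
          C = covariance(abs(s_w), abs(s_pl))
          _, logdet = np.linalg.slogdet(C)
          return 0.5 * (logdet + r @ np.linalg.solve(C, r))

      mle = minimize(neg_log_like, x0=[1.0, 1.0, 0.5], method="Nelder-Mead")
      print("MLE of (rate, sigma_white, sigma_rw):", mle.x)

    A Monte Carlo Markov Chain sampler applied to the same negative log-likelihood (plus priors) would instead return posterior samples, from which the parameter uncertainties follow directly.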

  10. Final Report for Dynamic Models for Causal Analysis of Panel Data. Quality of Maximum Likelihood Estimates of Parameters in a Log-Linear Rate Model. Part III, Chapter 3.

    ERIC Educational Resources Information Center

    Fennell, Mary L.; And Others

    This document is part of a series of chapters described in SO 011 759. This chapter reports the results of Monte Carlo simulations designed to analyze problems of using maximum likelihood estimation (MLE: see SO 011 767) in research models which combine longitudinal and dynamic behavior data in studies of change. Four complications--censoring of…

  11. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  12. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    SciTech Connect

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    first use of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  13. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

  14. An Adjusted Likelihood Ratio Approach Analysing Distribution of Food Products to Assist the Investigation of Foodborne Outbreaks

    PubMed Central

    Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter

    2015-01-01

    In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time, space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided that good traceability data are available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat products, in order to reach a more general conclusion about the applicability of the developed approach. PMID:26237468

  15. Novel applications using maximum-likelihood estimation in optical metrology and nuclear medical imaging: Point-diffraction interferometry and BazookaPET

    NASA Astrophysics Data System (ADS)

    Park, Ryeojin

    This dissertation aims to investigate two different applications in optics using maximum-likelihood (ML) estimation. The first application of ML estimation is used in optical metrology. For this application, an innovative iterative search method called the synthetic phase-shifting (SPS) algorithm is proposed. This search algorithm is used for estimation of a wavefront that is described by a finite set of Zernike Fringe (ZF) polynomials. In this work, we estimate the ZF coefficient, or parameter values of the wavefront, using a single interferogram obtained from a point-diffraction interferometer (PDI). In order to find the estimates, we first calculate the squared difference between the measured and simulated interferograms. Under certain assumptions, this squared-difference image can be treated as an interferogram showing the phase difference between the true wavefront deviation and the simulated wavefront deviation. The wavefront deviation is defined as the difference between the reference and the test wavefronts. We calculate the phase difference using a traditional phase-shifting technique without physical phase-shifters. We present a detailed forward model for the PDI interferogram, including the effect of the finite size of a detector pixel. The algorithm was validated with computational studies and its performance and constraints are discussed. A prototype PDI was built and the algorithm was also experimentally validated. A large wavefront deviation was successfully estimated without using null optics or physical phase-shifters. The experimental result shows that the proposed algorithm has great potential to provide an accurate tool for non-null testing. The second application of ML estimation is used in nuclear medical imaging. A high-resolution positron tomography scanner called BazookaPET is proposed. We have designed and developed a novel proof-of-concept detector element for a PET system called BazookaPET. In order to complete the PET configuration, at least

  16. Evaluation of infrared spectra analyses using a likelihood ratio approach: A practical example of spray paint examination.

    PubMed

    Muehlethaler, Cyril; Massonnet, Geneviève; Hicks, Tacha

    2016-03-01

    Depending on the forensic discipline and on the analytical techniques used, Bayesian methods of evaluation have been applied both as a two-step approach (first comparison, then evaluation) and as a continuous approach (comparison and evaluation in one step). However, in order to use the continuous approach, the measurements have to be reliably summarized as a numerical value linked to the property of interest, whose occurrence can be determined (e.g., refractive index measurements of glass samples). For paint traces analyzed by Fourier transform infrared spectroscopy (FTIR), however, the statistical comparison of the spectra is generally done by a similarity measure (e.g., Pearson correlation, Euclidean distance). Although useful, these measures cannot be directly associated with frequencies of occurrence of the chemical composition (binders, extenders, pigments). The continuous approach as described above is not possible, and a two-step evaluation, 1) comparison of the spectra and 2) evaluation of the results, is therefore the common practice reported by most laboratories. Derived from a practical question that arose during casework, a way of integrating the similarity measure between spectra into a continuous likelihood ratio formula was explored. This article proposes the use of a likelihood ratio approach with the similarity measure of infrared spectra of spray paints, based on distributions of sub-populations given by the color and composition of spray paint cans. Taking into account not only the rarity of the paint composition but also the "quality" of the analytical match provides a more balanced evaluation given source- or activity-level propositions. We also demonstrate that a joint statistical-expertal methodology allows for a more transparent evaluation of the results and makes better use of current knowledge.
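
    A minimal sketch of this kind of score-based likelihood ratio is given below. It assumes that collections of Pearson correlations for known same-source and known different-source spectrum pairs are already available; the kernel-density score models and all names are illustrative, not the authors' implementation.

      import numpy as np
      from scipy.stats import gaussian_kde, pearsonr

      def similarity(spec_a, spec_b):
          # similarity measure between two preprocessed IR spectra
          return pearsonr(spec_a, spec_b)[0]

      def score_lr(score, same_source_scores, diff_source_scores):
          # LR = density of the observed score among same-source pairs divided
          # by its density among different-source pairs
          f_same = gaussian_kde(same_source_scores)
          f_diff = gaussian_kde(diff_source_scores)
          return f_same(score)[0] / f_diff(score)[0]

      # lr = score_lr(similarity(questioned_spectrum, reference_spectrum),
      #               same_source_scores, diff_source_scores)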

  17. MAXIMUM LIKELIHOOD FITTING OF X-RAY POWER DENSITY SPECTRA: APPLICATION TO HIGH-FREQUENCY QUASI-PERIODIC OSCILLATIONS FROM THE NEUTRON STAR X-RAY BINARY 4U1608-522

    SciTech Connect

    Barret, Didier; Vaughan, Simon

    2012-02-20

    High-frequency quasi-periodic oscillations (QPOs) from weakly magnetized neutron stars display rapid frequency variability (second timescales) and high coherence with quality factors up to at least 200 at frequencies of about 800-850 Hz. Their parameters have been estimated so far from standard min(χ²) fitting techniques, after combining a large number of power density spectra (PDS), to have the powers normally distributed (the so-called Gaussian regime). Before combining PDS, different methods to minimize the effects of the frequency drift on the estimates of the QPO parameters have been proposed, but none of them relied on fitting the individual PDS. Accounting for the statistical properties of PDS, we apply a maximum likelihood method to derive the QPO parameters in the non-Gaussian regime. The method presented is general, easy to implement, and can be applied to fitting individual PDS, several PDS simultaneously, or their average, and is obviously not specific to the analysis of kHz QPO data. It applies to the analysis of any PDS optimized in frequency resolution and for low-frequency variability, or PDS containing features whose parameters vary on short timescales, as is the case for kHz QPOs. It is equivalent to the standard χ² minimization fitting when the number of PDS fitted is large. The accuracy, reliability, and superiority of the method are demonstrated with simulations of synthetic PDS containing Lorentzian QPOs of known parameters. Accounting for the broadening of the QPO profile, due to the leakage of power inherent to windowed Fourier transforms, the maximum likelihood estimates of the QPO parameters are asymptotically unbiased and have negligible bias when the QPO is reasonably well detected. By contrast, we show that the standard min(χ²) fitting method gives biased parameters with larger uncertainties. The maximum likelihood fitting method is applied to a subset of archival Rossi X-ray Timing Explorer data of the neutron
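
    The following sketch shows the general form of such a fit for one individual periodogram, assuming the powers are exponentially distributed about the model (the Whittle-type likelihood); the Lorentzian-plus-constant model, starting values and synthetic data are illustrative and not taken from the paper.

      import numpy as np
      from scipy.optimize import minimize

      def model(f, norm, f0, fwhm, const):
          # Lorentzian QPO plus a flat noise level
          return norm * (fwhm / (2 * np.pi)) / ((f - f0)**2 + (fwhm / 2)**2) + const

      def neg_log_like(params, f, power):
          # negative log-likelihood for exponentially distributed periodogram powers
          s = model(f, *params)
          if np.any(s <= 0):
              return np.inf
          return np.sum(np.log(s) + power / s)

      rng = np.random.default_rng(2)
      freq = np.linspace(500.0, 1100.0, 1200)
      pds = rng.exponential(model(freq, 300.0, 800.0, 5.0, 2.0))   # one synthetic PDS
      fit = minimize(neg_log_like, x0=[200.0, 795.0, 8.0, 2.0], args=(freq, pds),
                     method="Nelder-Mead")
      print("ML estimates of (norm, f0, FWHM, constant):", fit.x)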

  18. Likelihood-based estimation of the effective population size using temporal changes in allele frequencies: a genealogical approach.

    PubMed Central

    Berthier, Pierre; Beaumont, Mark A; Cornuet, Jean-Marie; Luikart, Gordon

    2002-01-01

    A new genetic estimator of the effective population size (N(e)) is introduced. This likelihood-based (LB) estimator uses two temporally spaced genetic samples of individuals from a population. We compared its performance to that of the classical F-statistic-based N(e) estimator (N(eFk)) by using data from simulated populations with known N(e) and real populations. The new likelihood-based estimator (N(eLB)) showed narrower credible intervals and greater accuracy than N(eFk) when genetic drift was strong, but performed only slightly better when genetic drift was relatively weak. When drift was strong (e.g., N(e) = 20 for five generations), as few as approximately 10 loci (heterozygosity of 0.6; samples of 30 individuals) are sufficient to consistently achieve credible intervals with an upper limit <50 using the LB method. In contrast, approximately 20 loci are required for the same precision when using the classical F-statistic approach. The N(eLB) estimator is much improved over the classical method when there are many rare alleles. It will be especially useful in conservation biology because it less often overestimates N(e) than does N(eFk) and thus is less likely to erroneously suggest that a population is large and has a low extinction risk. PMID:11861575

  19. Probabilistic evaluation of n traces with no putative source: A likelihood ratio based approach in an investigative framework.

    PubMed

    De March, I; Sironi, E; Taroni, F

    2016-09-01

    Analysis of marks recovered from different crime scenes can be useful to detect a linkage between criminal cases, even though a putative source for the recovered traces is not available. This particular circumstance is often encountered in the early stage of investigations and thus, the evaluation of evidence association may provide useful information for the investigators. This association is evaluated here from a probabilistic point of view: a likelihood ratio based approach is suggested in order to quantify the strength of the evidence of trace association in the light of two mutually exclusive propositions, namely that the n traces come from a common source or from an unspecified number of sources. To deal with this kind of problem, probabilistic graphical models are used, in form of Bayesian networks and object-oriented Bayesian networks, allowing users to intuitively handle with uncertainty related to the inferential problem. PMID:27490842

  20. Monte Carlo studies of ocean wind vector measurements by SCATT: Objective criteria and maximum likelihood estimates for removal of aliases, and effects of cell size on accuracy of vector winds

    NASA Technical Reports Server (NTRS)

    Pierson, W. J.

    1982-01-01

    The scatterometer on the National Oceanic Satellite System (NOSS) is studied by means of Monte Carlo techniques so as to determine the effect of two additional antennas for alias (or ambiguity) removal by means of an objective criteria technique and a normalized maximum likelihood estimator. Cells nominally 10 km by 10 km, 10 km by 50 km, and 50 km by 50 km are simulated for winds of 4, 8, 12 and 24 m/s and incidence angles of 29, 39, 47, and 53.5 deg for 15 deg changes in direction. The normalized maximum likelihood estimate (MLE) is correct a large part of the time, but the objective criterion technique is recommended as a reserve, and more quickly computed, procedure. Both methods for alias removal depend on the differences in the present model function at upwind and downwind. For 10 km by 10 km cells, it is found that the MLE method introduces a correlation between wind speed errors and aspect angle (wind direction) errors that can be as high as 0.8 or 0.9 and that the wind direction errors are unacceptably large, compared to those obtained for the SASS for similar assumptions.

  1. A Simple 2D Non-Parametric Resampling Statistical Approach to Assess Confidence in Species Identification in DNA Barcoding—An Alternative to Likelihood and Bayesian Approaches

    PubMed Central

    Jin, Qian; He, Li-Jun; Zhang, Ai-Bing

    2012-01-01

    In the recent worldwide campaign for the global biodiversity inventory via DNA barcoding, a simple and easily used measure of confidence for assigning sequences to species in DNA barcoding has not been established so far, although the likelihood ratio test and the Bayesian approach have been proposed to address this issue from a statistical point of view. The TDR (Two Dimensional non-parametric Resampling) measure newly proposed in this study offers users a simple and easy approach to evaluate the confidence of species membership in DNA barcoding projects. We assessed the validity and robustness of the TDR approach using datasets simulated under coalescent models, and an empirical dataset, and found that the TDR measure is very robust in assessing species membership in DNA barcoding. In contrast to the likelihood ratio test and Bayesian approach, the TDR method stands out due to simplicity in both concepts and calculations, with little in the way of restrictive population genetic assumptions. To implement this approach we have developed a computer program package (TDR1.0beta) freely available from ftp://202.204.209.200/education/video/TDR1.0beta.rar. PMID:23239988

  2. Searching for first-degree familial relationships in California's offender DNA database: validation of a likelihood ratio-based approach.

    PubMed

    Myers, Steven P; Timken, Mark D; Piucci, Matthew L; Sims, Gary A; Greenwald, Michael A; Weigand, James J; Konzak, Kenneth C; Buoncristiani, Martin R

    2011-11-01

    A validation study was performed to measure the effectiveness of using a likelihood ratio-based approach to search for possible first-degree familial relationships (full-sibling and parent-child) by comparing an evidence autosomal short tandem repeat (STR) profile to California's ∼1,000,000-profile State DNA Index System (SDIS) database. Test searches used autosomal STR and Y-STR profiles generated for 100 artificial test families. When the test sample and the first-degree relative in the database were characterized at the 15 Identifiler(®) (Applied Biosystems(®), Foster City, CA) STR loci, the search procedure included 96% of the fathers and 72% of the full-siblings. When the relative profile was limited to the 13 Combined DNA Index System (CODIS) core loci, the search procedure included 93% of the fathers and 61% of the full-siblings. These results, combined with those of functional tests using three real families, support the effectiveness of this tool. Based upon these results, the validated approach was implemented as a key, pragmatic and demonstrably practical component of the California Department of Justice's Familial Search Program. An investigative lead created through this process recently led to an arrest in the Los Angeles Grim Sleeper serial murders.

  3. A likelihood-based approach for assessment of extra-pair paternity and conspecific brood parasitism in natural populations

    USGS Publications Warehouse

    Lemons, Patrick R.; Marshall, T.C.; McCloskey, Sarah E.; Sethi, S.A.; Schmutz, Joel A.; Sedinger, James S.

    2015-01-01

    Genotypes are frequently used to assess alternative reproductive strategies such as extra-pair paternity and conspecific brood parasitism in wild populations. However, such analyses are vulnerable to genotyping error or molecular artifacts that can bias results. For example, when using multilocus microsatellite data, a mismatch at a single locus, suggesting the offspring was not directly related to its putative parents, can occur quite commonly even when the offspring is truly related. Some recent studies have advocated an ad-hoc rule that offspring must differ at more than one locus in order to conclude that they are not directly related. While this reduces the frequency with which true offspring are identified as not directly related young, it also introduces bias in the opposite direction, wherein not directly related young are categorized as true offspring. More importantly, it ignores the additional information on allele frequencies which would reduce overall bias. In this study, we present a novel technique for assessing extra-pair paternity and conspecific brood parasitism using a likelihood-based approach in a new version of program cervus. We test the suitability of the technique by applying it to a simulated data set and then present an example to demonstrate its influence on the estimation of alternative reproductive strategies.

  4. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  5. Speech processing using maximum likelihood continuity mapping

    SciTech Connect

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  6. NON-REGULAR MAXIMUM LIKELIHOOD ESTIMATION

    EPA Science Inventory

    Even though a body of data on the environmental occurrence of medicinal, government-approved ("ethical") pharmaceuticals has been growing over the last two decades (the subject of this book), nearly nothing is known about the disposition of illicit (illegal) drugs in th...

  7. A Monte Carlo comparison of the recovery of winds near upwind and downwind from the SASS-1 model function by means of the sum of squares algorithm and a maximum likelihood estimator

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.

    1984-01-01

    Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of K sub p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.
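
    To make the MLE step concrete, the toy retrieval below maximizes a Gaussian likelihood of multi-azimuth backscatter over a grid of wind speed and direction. The harmonic model function, its coefficients, the Kp noise model and all numbers are invented placeholders; this is not the SASS-1 model function used in the study.

      import numpy as np

      rng = np.random.default_rng(3)

      def toy_sigma0(speed, rel_azimuth):
          # generic scatterometer-style harmonic model; coefficients are made up
          return 0.01 * speed**1.5 * (1.0 + 0.4 * np.cos(rel_azimuth) + 0.6 * np.cos(2 * rel_azimuth))

      def neg_log_like(speed, direction, meas, ant_az, kp=0.1):
          pred = toy_sigma0(speed, direction - ant_az)
          var = (kp * pred) ** 2
          return np.sum(0.5 * np.log(var) + 0.5 * (meas - pred) ** 2 / var)

      ant_az = np.deg2rad([0.0, 45.0, 90.0, 135.0])       # antenna look directions
      true_speed, true_dir = 8.0, np.deg2rad(60.0)
      meas = toy_sigma0(true_speed, true_dir - ant_az) * (1 + 0.1 * rng.normal(size=4))

      grid = [(s, d) for s in np.linspace(1, 25, 97) for d in np.deg2rad(np.arange(0, 360, 5))]
      best = min(grid, key=lambda sd: neg_log_like(sd[0], sd[1], meas, ant_az))
      print("retrieved speed (m/s), direction (deg):", best[0], np.degrees(best[1]))

    With only a few azimuth looks the minimum is not always unique, which is the alias (ambiguity) problem the record above addresses.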

  8. Likelihood and clinical trials.

    PubMed

    Hill, G; Forbes, W; Kozak, J; MacNeill, I

    2000-03-01

    The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory. PMID:10760630

  9. Analyzing Trade Dynamics from Incomplete Data in Spatial Regional Models: a Maximum Entropy Approach

    NASA Astrophysics Data System (ADS)

    Papalia, Rosa Bernardini

    2008-11-01

    Flow data are viewed as cross-classified data, and spatial interaction models are reformulated as log-linear models. According to this view, we introduce a spatial panel data model and derive a Generalized Maximum Entropy based estimation approach. The estimator has the advantage of being consistent with the underlying data generation process and, eventually, with the restrictions implied by non-sample information or by past empirical evidence, while also controlling for collinearity and endogeneity problems.

  10. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  11. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    PubMed

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools used for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and makes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to terms' ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-Measure in both ambiguous and monosemous MeSH terms.
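
    As a rough illustration of a maximum-entropy text classifier of this general kind (multinomial logistic regression), the snippet below uses scikit-learn on a toy corpus; the documents and labels are placeholders, not MeSH training data, and the published system differs in features and scale.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # tiny placeholder corpus with two made-up labels
      docs = ["protein folding dynamics", "tumor suppressor gene expression",
              "protein structure prediction", "gene regulation in cancer"]
      labels = ["Proteins", "Neoplasms", "Proteins", "Neoplasms"]

      # logistic regression trained on TF-IDF features acts as a maximum-entropy classifier
      maxent = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      maxent.fit(docs, labels)
      print(maxent.predict(["misfolded protein aggregates"]))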

  12. The Work Ratio--modeling the likelihood of return to work for workers with musculoskeletal disorders: A fuzzy logic approach.

    PubMed

    Apalit, Nathan

    2010-01-01

    The world of musculoskeletal disorders (MSDs) is complicated and fuzzy. Fuzzy logic provides a precise framework for complex problems characterized by uncertainty, vagueness and imprecision. Although fuzzy logic would appear to be an ideal modeling language to help address the complexity of MSDs, little research has been done in this regard. The Work Ratio is a novel mathematical model that uses fuzzy logic to provide a numerical and linguistic valuation of the likelihood of return to work and remaining at work. It can be used for a worker with any MSD at any point in time. Basic mathematical concepts from set theory and fuzzy logic are reviewed. A case study is then used to illustrate the use of the Work Ratio. Its potential strengths and limitations are discussed. Further research of its use with a variety of MSDs, settings and multidisciplinary teams is needed to confirm its universal value.

  13. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality in the form of the Pontryagin maximum principle for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
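
    For orientation, one common way to state this class of problems is sketched below in LaTeX: a performance index written as a fractional integral and dynamics given by a Caputo fractional differential equation. The exact functional treated by the authors may differ, so this is only an illustrative form.

      \min_{u(\cdot)} \; J(u) = \frac{1}{\Gamma(\alpha)} \int_{0}^{T} (T-t)^{\alpha-1}\, L\bigl(t, x(t), u(t)\bigr)\, dt
      \quad \text{subject to} \quad {}^{C}\!D_{0^{+}}^{\alpha} x(t) = f\bigl(t, x(t), u(t)\bigr), \qquad x(0) = x_{0}, \quad 0 < \alpha \le 1 .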

  14. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

    Generators of spontaneous human brain activity such as alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps using the Fourier transform of each channel of electro-encephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for alpha rhythm are mainly located in the occipital and parieto-occipital lobes.

  15. Generalized maximum entropy approach to quasistationary states in long-range systems

    NASA Astrophysics Data System (ADS)

    Martelloni, Gabriele; Martelloni, Gianluca; de Buyl, Pierre; Fanelli, Duccio

    2016-02-01

    Systems with long-range interactions display a short-time relaxation towards quasistationary states (QSSs) whose lifetime increases with the system size. In the paradigmatic Hamiltonian mean-field model (HMF) out-of-equilibrium phase transitions are predicted and numerically detected which separate homogeneous (zero magnetization) and inhomogeneous (nonzero magnetization) QSSs. In the former regime, the velocity distribution presents (at least) two large, symmetric bumps, which cannot be self-consistently explained by resorting to the conventional Lynden-Bell maximum entropy approach. We propose a generalized maximum entropy scheme which accounts for the pseudoconservation of additional charges, the even momenta of the single-particle distribution. These latter are set to the asymptotic values, as estimated by direct integration of the underlying Vlasov equation, which formally holds in the thermodynamic limit. Methodologically, we operate in the framework of a generalized Gibbs ensemble, as sometimes defined in statistical quantum mechanics, which contains an infinite number of conserved charges. The agreement between theory and simulations is satisfying, both above and below the out-of-equilibrium transition threshold. A previously unaccessible feature of the QSSs, the multiple bumps in the velocity profile, is resolved by our approach.

  16. An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach

    NASA Astrophysics Data System (ADS)

    Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani

    2013-02-01

    The TL-moments approach has been used to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. The boxplot is used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most cases, the results show that TL-moments with the one smallest value trimmed from the conceptual sample (TL-moments (1,0)), applied to the GPA distribution, were the most appropriate in the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.

  17. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    PubMed

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in bivariate as well as multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require the use of conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions. PMID:26282441

  18. Maximum performance synergy: A new approach to recording studio control room design

    NASA Astrophysics Data System (ADS)

    Szymanski, Jeff D.

    2003-10-01

    Popular recording studio control room designs include LEDE(tm), RFZ(tm), and nonenvironment rooms. The common goal of all of these is to create an accurate acoustical environment that does not distort or otherwise color audio reproduction. Also common to these designs is the frequent need to have multiple ancillary recording rooms, often adjacent to the main control room, where group members perform. This approach, where group members are physically separated from one another, can lead to lack of ensemble in the finished recordings. New twists on old acoustical treatment techniques have been implemented at a studio in Nashville, Tennessee, which minimize the need for multiple ancillary recording rooms, thus creating an environment where talent, producer and recording professionals can all occupy the same space for maximum performance synergy. Semi-separated performance areas are designed around a central, critical listening area. The techniques and equipment required to achieve this separation are reviewed, as are advantages and disadvantages to this new control room design approach.

  20. Diffusion Tensor Estimation by Maximizing Rician Likelihood

    PubMed Central

    Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry

    2012-01-01

    Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
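
    A simplified sketch of a Rician maximum-likelihood fit is given below. To stay short it fits a single apparent diffusion coefficient rather than the full tensor, uses log-parameters to keep values positive, and relies on scipy.special.i0e for a numerically stable Bessel term; it is not the DTEMRL code and all names are illustrative.

      import numpy as np
      from scipy.special import i0e
      from scipy.optimize import minimize

      def rician_neg_log_like(params, b_values, measured):
          s0, adc, sigma = np.exp(params)           # log-parameters keep values positive
          a = s0 * np.exp(-b_values * adc)          # noise-free signal
          # log Rician pdf: log(m/sigma^2) - (m^2 + a^2)/(2 sigma^2) + log I0(m a / sigma^2)
          z = measured * a / sigma**2
          log_pdf = (np.log(measured / sigma**2) - (measured**2 + a**2) / (2 * sigma**2)
                     + np.log(i0e(z)) + z)          # i0e(z) = exp(-z) * I0(z)
          return -np.sum(log_pdf)

      # b_values, measured = ...   (acquired b-values and magnitude signals)
      # fit = minimize(rician_neg_log_like, x0=np.log([1000.0, 1e-3, 50.0]),
      #                args=(b_values, measured), method="Nelder-Mead")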

  1. Interpretation of FTIR spectra of polymers and Raman spectra of car paints by means of likelihood ratio approach supported by wavelet transform for reducing data dimensionality.

    PubMed

    Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz

    2015-05-01

    The problem of interpretation of common provenance of the samples within the infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was under investigation. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can be easily proposed for databases described by a few variables, the research focused on the problem of reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that easily deal with multidimensionality with an LR approach. The final variables used for LR model construction were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique supported by methods for variance analysis and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from DWT preserve the signal characteristics, providing a sparse representation of the original signal that keeps its shape and relevant chemical information. PMID:25757825
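
    The sketch below shows the two ingredients in their simplest form: a discrete wavelet decomposition of a spectrum (PyWavelets), keeping only the coarse approximation coefficients as a low-dimensional representation, and a univariate likelihood ratio on one derived variable with normal within- and between-source models. The wavelet, decomposition level and score models are assumptions for illustration, not the published ones.

      import numpy as np
      import pywt
      from scipy.stats import norm

      def dwt_features(spectrum, wavelet="db4", level=5):
          # keep only the approximation coefficients as a sparse representation;
          # the level must suit the length of the spectrum
          coeffs = pywt.wavedec(np.asarray(spectrum, dtype=float), wavelet, level=level)
          return coeffs[0]

      def univariate_lr(x, mu_same, sd_same, mu_diff, sd_diff):
          # LR for one derived variable with normal within/between-source models
          return norm.pdf(x, mu_same, sd_same) / norm.pdf(x, mu_diff, sd_diff)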

  3. Toxicological approach to setting spacecraft maximum allowable concentrations for carbon monoxide

    NASA Technical Reports Server (NTRS)

    Wong, K. L.; Limero, T. F.; James, J. T.

    1992-01-01

    The Spacecraft Maximum Allowable Concentrations (SMACs) are exposure limits for airborne chemicals used by NASA in spacecraft. The aim of these SMACs is to protect the spacecrew against adverse health effects and performance decrements that would interfere with mission objectives. Because the 1 and 24 hr SMACs are set for contingencies, minor reversible toxic effects that do not affect mission objectives are acceptable. The 7, 30, and 180 day SMACs are aimed at nominal operations, so they are established at levels that would cause neither noncarcinogenic toxic effects nor more than one case of tumor per 1000 exposed individuals over the background. The process used to set the SMACs for carbon monoxide (CO) is described to illustrate the approach used by NASA. After the toxicological literature on CO was reviewed, the data were summarized and separated into acute, subchronic, and chronic toxicity data. CO's toxicity depends on the formation of carboxyhemoglobin (COHb) in the blood, which reduces the blood's oxygen carrying capacity. The initial task was to estimate the COHb levels that would not produce toxic effects in the brain and heart.

  4. Estimates of the theoretical maximum daily intake of erythorbic acid, gallates, butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) in Italy: a stepwise approach.

    PubMed

    Leclercq, C; Arcella, D; Turrini, A

    2000-12-01

    The three recent EU directives which fixed maximum permitted levels (MPL) for food additives for all member states also include the general obligation to establish national systems for monitoring the intake of these substances in order to evaluate their safety of use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) was established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of exceeding the current ADI was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. Actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance in the technological process and the percentage of ingestion in the case of chewing gums.
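
    As a worked illustration of the arithmetic behind a theoretical maximum daily intake (TMDI), the snippet below multiplies maximum permitted levels by daily consumption over the three categories named above and expresses the total per kilogram of body weight. All numbers are hypothetical placeholders, not the Italian survey values.

      # hypothetical maximum permitted levels (mg additive per kg food)
      MPL_MG_PER_KG_FOOD = {
          "pastry, cake and biscuits": 200.0,
          "chewing gums": 400.0,
          "vegetable oils and margarine": 100.0,
      }
      # hypothetical mean daily consumption (kg food per day)
      CONSUMPTION_KG_PER_DAY = {
          "pastry, cake and biscuits": 0.05,
          "chewing gums": 0.002,
          "vegetable oils and margarine": 0.02,
      }

      tmdi_mg = sum(MPL_MG_PER_KG_FOOD[c] * CONSUMPTION_KG_PER_DAY[c]
                    for c in MPL_MG_PER_KG_FOOD)
      tmdi_mg_per_kg_bw = tmdi_mg / 60.0          # per kg body weight, assuming a 60 kg adult
      print(f"TMDI = {tmdi_mg_per_kg_bw:.3f} mg/kg bw/day")

    Comparing this figure with the ADI for each additive is the final step of such a stepwise screening.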

  5. A likelihood-based approach for computing the operating characteristics of the 3+3 phase I clinical trial design with extensions to other A+B designs

    PubMed Central

    Chiuzan, Cody; Garrett-Mayer, Elizabeth; Yeatts, Sharon D

    2014-01-01

    , a ‘3+3’ design is often incapable of providing sufficient levels of evidence of acceptability for doses under reasonable scenarios. Limitations: Given the small sample size per dose, the levels of evidence are limited in their ability to provide strong evidence favoring either of the hypotheses. Conclusions: In many situations, the ‘3+3’ design does not treat enough patients per dose to have confidence in correct dose selection, and the safety of the selected/unselected doses. This likelihood method allows consistent inferences to be made at each dose level, and evidence to be quantified regardless of cohort size. The approach can be used in phase I studies for identifying acceptably safe doses, but also for defining stopping rules in other types of dose-finding designs. PMID:25349178
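
    The kind of evidential calculation involved can be illustrated with a binomial likelihood ratio comparing an unacceptably high dose-limiting-toxicity (DLT) rate with an acceptable one, given the DLTs observed in a small cohort; the two rates and the cohort below are hypothetical, and the published method may define its hypotheses differently.

      from scipy.stats import binom

      def lr_toxic_vs_safe(x_dlt, n_patients, p_unacceptable=0.33, p_acceptable=0.17):
          # likelihood ratio for an unacceptable versus an acceptable DLT rate
          return binom.pmf(x_dlt, n_patients, p_unacceptable) / binom.pmf(x_dlt, n_patients, p_acceptable)

      # one DLT among six patients treated at a dose
      print(lr_toxic_vs_safe(1, 6))

    Values below 1 favour the acceptable-rate hypothesis; with cohorts this small the ratio rarely moves far from 1, which is the limitation the record above quantifies.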

  6. Constraint likelihood analysis for a network of gravitational wave detectors

    SciTech Connect

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-12-15

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows us to reconstruct the source coordinates and the waveforms of two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  7. An ecological function and services approach to total maximum daily load (TMDL) prioritization.

    PubMed

    Hall, Robert K; Guiliano, David; Swanson, Sherman; Philbin, Michael J; Lin, John; Aron, Joan L; Schafer, Robin J; Heggem, Daniel T

    2014-04-01

    Prioritizing total maximum daily load (TMDL) development starts by considering the scope and severity of water pollution and risks to public health and aquatic life. Methodology using quantitative assessments of in-stream water quality is appropriate and effective for point source (PS) dominated discharge, but less so in watersheds with mostly nonpoint source (NPS) related impairments. For NPSs, prioritization in TMDL development and implementation of associated best management practices should focus on restoration of ecosystem physical functions, including how restoration effectiveness depends on design, maintenance and placement within the watershed. To refine the approach to TMDL development, regulators and stakeholders must first ask if the watershed, or ecosystem, is at risk of losing riparian or other ecologically based physical attributes and processes. If so, the next step is an assessment of the spatial arrangement of functionality with a focus on the at-risk areas that could be lost, or could, with some help, regain functions. Evaluating stream and wetland riparian function has advantages over the traditional means of water quality and biological assessments for NPS TMDL development. Understanding how an ecosystem functions enables stakeholders and regulators to determine the severity of problem(s), identify source(s) of impairment, and predict and avoid a decline in water quality. The Upper Reese River, Nevada, provides an example of water quality impairment caused by NPS pollution. In this river basin, stream and wetland riparian proper functioning condition (PFC) protocol, water quality data, and remote sensing imagery were used to identify sediment sources, transport, distribution, and its impact on water quality and aquatic resources. This study found that assessments of ecological function could be used to generate leading (early) indicators of water quality degradation for targeting pollution control measures, while traditional in-stream water

  8. A Bayesian approach for calculating variable total maximum daily loads and uncertainty assessment.

    PubMed

    Chen, Dingjiang; Dahlgren, Randy A; Shen, Yena; Lu, Jun

    2012-07-15

    To account for both variability and uncertainty in nonpoint source pollution, a one-dimensional water quality model was integrated with Bayesian statistics and load duration curve methods to develop a variable total maximum daily load (TMDL) for total nitrogen (TN). Bayesian statistics was adopted to inversely calibrate the unknown parameters in the model, i.e., the area-specific export rate (E) and the in-stream loss rate coefficient (K) for TN, from the stream monitoring data. Prior distributions for E and K based on published measurements were developed to support Bayesian parameter calibration. The resulting E and K values were then used in the water quality model to simulate catchment TN export load, TMDL and required load reduction, along with their uncertainties, in the ChangLe River agricultural watershed in eastern China. Results indicated that the export load, TMDL and required load reduction for TN increased synchronously with increasing stream water discharge. The uncertainties associated with these estimates also presented temporal variability, with higher uncertainties for the high flow regime and lower uncertainties for the low flow regime. To assure 90% compliance with the targeted in-stream TN concentration of 2.0 mg L(-1), the required load reduction was determined to be 1.7 × 10(3), 4.6 × 10(3), and 14.6 × 10(3) kg TN d(-1) for the low, median and high flow regimes, respectively. The integrated modeling approach developed in this study allows decision makers to determine the required load reduction for different TN compliance levels while incorporating both flow-dependent variability and uncertainty assessment to support practical adaptive implementation of TMDL programs.
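
    A heavily simplified sketch of the Bayesian calibration step is shown below: an export rate E and an in-stream loss coefficient K drive a toy load model, and a basic Metropolis sampler draws from their posterior given observed loads. The model form, priors and numbers are illustrative assumptions, not the published ChangLe River model.

      import numpy as np

      rng = np.random.default_rng(4)

      def log_posterior(theta, obs_loads, area_km2, travel_time_d, sigma):
          logE, logK = theta
          E, K = np.exp(logE), np.exp(logK)
          pred = E * area_km2 * np.exp(-K * travel_time_d)   # export then first-order in-stream loss
          log_like = -0.5 * np.sum((obs_loads - pred) ** 2) / sigma**2
          log_prior = -0.5 * ((logE - np.log(10.0))**2 + (logK - np.log(0.2))**2)  # vague lognormal priors
          return log_like + log_prior

      def metropolis(obs_loads, area_km2, travel_time_d, sigma, n_iter=20000, step=0.05):
          theta = np.array([np.log(10.0), np.log(0.2)])
          lp = log_posterior(theta, obs_loads, area_km2, travel_time_d, sigma)
          samples = np.empty((n_iter, 2))
          for i in range(n_iter):
              prop = theta + step * rng.normal(size=2)
              lp_prop = log_posterior(prop, obs_loads, area_km2, travel_time_d, sigma)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              samples[i] = theta
          return np.exp(samples)        # posterior samples of (E, K)

      # Hypothetical usage: observed daily TN loads (kg/d) for a 100 km2 catchment
      # obs = np.array([850.0, 920.0, 780.0])
      # E_K_samples = metropolis(obs, area_km2=100.0, travel_time_d=1.5, sigma=50.0)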

  9. Maximum Evaporation Rates of Water Droplets Approaching Obstacles in the Atmosphere Under Icing Conditions

    NASA Technical Reports Server (NTRS)

    Lowell, H. H.

    1953-01-01

    When a closed body or a duct envelope moves through the atmosphere, air pressure and temperature rises occur ahead of the body or, under ram conditions, within the duct. If cloud water droplets are encountered, droplet evaporation will result because of the air-temperature rise and the relative velocity between the droplet and stagnating air. It is shown that the solution of the steady-state psychrometric equation provides evaporation rates which are the maximum possible when droplets are entrained in air moving along stagnation lines under such conditions. Calculations are made for a wide variety of water droplet diameters, ambient conditions, and flight Mach numbers. Droplet diameter, body size, and Mach number effects are found to predominate, whereas wide variation in ambient conditions is of relatively small significance in the determination of evaporation rates. The results are essentially exact for the case of movement of droplets having diameters smaller than about 30 microns along relatively long ducts (length at least several feet) or toward large obstacles (wings), since disequilibrium effects are then of little significance. Mass losses in the case of movement within ducts will often be significant fractions (one-fifth to one-half) of original droplet masses, while very small droplets within ducts will often disappear even though the entraining air is not fully stagnated. Wing-approach evaporation losses will usually be of the order of several percent of original droplet masses. Two numerical examples are given of the determination of local evaporation rates and total mass losses in cases involving cloud droplets approaching circular cylinders along stagnation lines. The cylinders chosen were of 3.95-inch (10.0+ cm) diameter and 39.5-inch (100+ cm) diameter. The smaller is representative of icing-rate measurement cylinders, while with the larger will be associated an air-flow field similar to that ahead of an airfoil having a leading-edge radius

  10. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  11. The phylogenetic likelihood library.

    PubMed

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).
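
    For orientation, the quantity such libraries optimize is the phylogenetic likelihood computed by Felsenstein's pruning recursion. The sketch below is a conceptual illustration under a symmetric two-state substitution model on a tiny hypothetical tree; it is not the PLL API, whose data structures and function names differ. Production libraries add rate heterogeneity, richer substitution models, the numerical scaling against underflow mentioned above, and vectorized per-site loops.

```python
import numpy as np

# Conceptual sketch of the phylogenetic likelihood function (NOT the PLL API):
# Felsenstein pruning on a tiny rooted tree under a symmetric two-state model.
def p_matrix(t):
    """Transition probabilities of the two-state symmetric (CFN) model."""
    same = 0.5 + 0.5 * np.exp(-2.0 * t)
    diff = 0.5 - 0.5 * np.exp(-2.0 * t)
    return np.array([[same, diff], [diff, same]])

def leaf_vector(state):
    v = np.zeros(2)
    v[state] = 1.0
    return v

def node_likelihood(children):
    """Conditional likelihood vector of an internal node.
    children: list of (child_conditional_vector, branch_length)."""
    out = np.ones(2)
    for vec, t in children:
        out *= p_matrix(t) @ vec       # sum over the child's states
    return out

# Rooted tree ((A:0.1, B:0.2):0.05, C:0.3) with observed states A=0, B=0, C=1
ab = node_likelihood([(leaf_vector(0), 0.1), (leaf_vector(0), 0.2)])
root = node_likelihood([(ab, 0.05), (leaf_vector(1), 0.3)])
site_likelihood = 0.5 * root.sum()     # stationary frequencies are (0.5, 0.5)
print(np.log(site_likelihood))
```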

  12. Evaluating the efficacies of Maximum Tolerated Dose and metronomic chemotherapies: A mathematical approach

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Martins, Marcelo L.; Mancera, Paulo F. A.

    2016-08-01

    We present a mathematical model based on partial differential equations that is applied to understand tumor development and its response to chemotherapy. Our primary aim is to evaluate comparatively the efficacies of two chemotherapeutic protocols, Maximum Tolerated Dose (MTD) and metronomic, as well as two methods of drug delivery. Concerning therapeutic outcomes, the metronomic protocol proves more effective in prolonging the patient's life than MTD. Moreover, a uniform drug delivery method combined with the metronomic protocol is the most efficient strategy to reduce tumor density.

  13. Reply to ``Comment on `Mobility spectrum computational analysis using a maximum entropy approach' ''

    NASA Astrophysics Data System (ADS)

    Mironov, O. A.; Myronov, M.; Kiatgamolchai, S.; Kantser, V. G.

    2004-03-01

    In their Comment [J. Antoszewski, D. D. Redfern, L. Faraone, J. R. Meyer, I. Vurgaftman, and J. Lindemuth, Phys. Rev. E 69, 038701 (2004)] on our paper [S. Kiatgamolchai, M. Myronov, O. A. Mironov, V. G. Kantser, E. H. C. Parker, and T. E. Whall, Phys. Rev. E 66, 036705 (2002)] the authors present computational results obtained with the improved quantitative mobility spectrum analysis technique implemented in the commercial software of Lake Shore Cryotronics. We suggest that this is just information additional to the mobility spectrum analysis (MSA) in general without any direct relation to our maximum entropy MSA (ME-MSA) algorithm.

  14. What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE PAGES Beta

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A.; Vaswani, Namrata; Petrich, Jacob W.

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
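
    The contrast between the two estimators can be reproduced on synthetic data. The sketch below is not the authors' in-house routines and omits the instrument-response convolution they include; it bins exponentially distributed arrival times into a histogram and fits the lifetime by minimizing either the Poisson negative log-likelihood (ML) or the unweighted sum of squared residuals (RM). All counts, bin settings, and the 0.53 ns lifetime are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulate a sparse TCSPC-style histogram and fit the lifetime two ways.
rng = np.random.default_rng(0)
tau_true, n_photons, n_bins, t_max = 0.53, 200, 256, 5.0   # lifetime in ns (illustrative)
edges = np.linspace(0.0, t_max, n_bins + 1)
arrival = rng.exponential(tau_true, n_photons)
counts, _ = np.histogram(arrival[arrival < t_max], bins=edges)
t = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def expected(tau):
    # Expected counts per bin for a truncated mono-exponential decay
    return counts.sum() * width * np.exp(-t / tau) / (tau * (1.0 - np.exp(-t_max / tau)))

def poisson_nll(tau):
    mu = expected(tau)
    return np.sum(mu - counts * np.log(mu + 1e-300))        # Poisson NLL, constants dropped

def residual_ssq(tau):
    return np.sum((counts - expected(tau)) ** 2)            # unweighted residual minimization

tau_ml = minimize_scalar(poisson_nll, bounds=(0.05, 5.0), method="bounded").x
tau_rm = minimize_scalar(residual_ssq, bounds=(0.05, 5.0), method="bounded").x
print(f"ML: {tau_ml:.3f} ns   RM: {tau_rm:.3f} ns   true: {tau_true} ns")
```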

  15. A Distributed Approach to Maximum Power Point Tracking for Photovoltaic Submodule Differential Power Processing

    SciTech Connect

    Qin, SB; Cady, ST; Dominguez-Garcia, AD; Pilawa-Podgurski, RCN

    2015-04-01

    This paper presents the theory and implementation of a distributed algorithm for controlling differential power processing converters in photovoltaic (PV) applications. This distributed algorithm achieves true maximum power point tracking of series-connected PV submodules by relying only on local voltage measurements and neighbor-to-neighbor communication between the differential power converters. Compared to previous solutions, the proposed algorithm achieves a reduced number of perturbations at each step and potentially faster tracking without adding extra hardware; these features make the algorithm well suited for long submodule strings. The formulation of the algorithm, a discussion of its properties, and three case studies are presented. The performance of the distributed tracking algorithm has been verified via experiments, which yielded quantifiable improvements over other techniques that have been implemented in practice. Both simulations and hardware experiments have confirmed the effectiveness of the proposed distributed algorithm.

  16. Bayesian Maximum Entropy Approach to Mapping Soil Moisture at the Field Scale

    NASA Astrophysics Data System (ADS)

    Dong, J.; Ochsner, T.; Cosh, M. H.

    2012-12-01

    The study of soil moisture spatial variability at the field scale is important to aid in modeling hydrological processes at the land surface. The Bayesian Maximum Entropy (BME) framework is a more general method than classical geostatistics and has not yet been applied to soil moisture spatial estimation. This research compares the effectiveness of BME versus kriging estimators for spatial prediction of soil moisture at the field scale. Surface soil moisture surveys were conducted in a 227 ha pasture at the Marena, Oklahoma In Situ Sensor Testbed (MOISST) site. Remotely sensed vegetation data will be incorporated into the soil moisture spatial prediction using the BME method. Soil moisture maps based on the BME and traditional kriging frameworks will be cross-validated and compared.

  17. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature, while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparison with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.
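
    A hedged sketch of the core reconstruction step, under the simplifying assumptions of a one-dimensional velocity space, a low-order monomial moment set, and grid quadrature: the maximum-entropy distribution matching prescribed moments on a finite box can be recovered by minimizing the convex dual of the entropy problem. The domain, moment order, and target values below are illustrative, not taken from the work described above.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy reconstruction of f(c) on a finite box from moments m_0..m_3
# via the convex dual: minimize  ∫ exp(Σ_k λ_k c^k) dc − Σ_k λ_k m_k.
c = np.linspace(-5.0, 5.0, 401)                  # finite single-particle velocity domain
dc = c[1] - c[0]
powers = np.vander(c, N=4, increasing=True)      # columns: c^0, c^1, c^2, c^3
m_target = np.array([1.0, 0.3, 1.2, 0.5])        # prescribed moments (illustrative)

def dual(lam):
    f = np.exp(powers @ lam)
    return np.sum(f) * dc - lam @ m_target

def dual_grad(lam):
    f = np.exp(powers @ lam)
    return (powers * f[:, None]).sum(axis=0) * dc - m_target

lam_hat = minimize(dual, np.zeros(4), jac=dual_grad, method="BFGS").x
f_rec = np.exp(powers @ lam_hat)                 # reconstructed distribution on the grid
print((powers * f_rec[:, None]).sum(axis=0) * dc)  # ≈ m_target at convergence
```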

  18. Hybrid Evolutionary Approaches to Maximum Lifetime Routing and Energy Efficiency in Sensor Mesh Networks.

    PubMed

    Rahat, Alma A M; Everson, Richard M; Fieldsend, Jonathan E

    2015-01-01

    Mesh network topologies are becoming increasingly popular in battery-powered wireless sensor networks, primarily because of the extension of network range. However, multihop mesh networks suffer from higher energy costs, and the routing strategy employed directly affects the lifetime of nodes with limited energy resources. Hence when planning routes there are trade-offs to be considered between individual and system-wide battery lifetimes. We present a multiobjective routing optimisation approach using hybrid evolutionary algorithms to approximate the optimal trade-off between the minimum lifetime and the average lifetime of nodes in the network. In order to accomplish this combinatorial optimisation rapidly, our approach prunes the search space using k-shortest path pruning and a graph reduction method that finds candidate routes promoting long minimum lifetimes. When arbitrarily many routes from a node to the base station are permitted, optimal routes may be found as the solution to a well-known linear program. We present an evolutionary algorithm that finds good routes when each node is allowed only a small number of paths to the base station. On a real network deployed in the Victoria & Albert Museum, London, these solutions, using only three paths per node, are able to achieve minimum lifetimes of over 99% of the optimum linear program solution's time to first sensor battery failure. PMID:25950392

  20. Quantitative Approach to Incorporating Stakeholder Values into Total Maximum Daily Loads: Dominguez Channel Case Study

    SciTech Connect

    Stewart, J; Greene, G; Smith, A; Sicherman, A

    2005-03-03

    The federal Clean Water Act (CWA) Section 303(d)(1)(A) requires each state to conduct a biennial assessment of its waters, and identify those waters that are not achieving water quality standards. The result of this assessment is called the 303(d) list. The CWA also requires states to establish a priority ranking for waters on the 303(d) list of impaired waters and to develop and implement Total Maximum Daily Loads (TMDLs) for these waters. Over 30,000 segments of waterways have been listed as impaired by the Environmental Protection Agency (EPA). The EPA has requested local communities to submit plans to reduce discharges by specified dates or have them developed by the EPA. An investigation of this process found that plans to reduce discharges were being developed based on a wide range of site investigation methods. The Department of Energy requested Lawrence Livermore National Laboratory to develop appropriate tools to assist in improving the TMDL process. The EPA has shown support and encouragement of this effort. Our investigation found that, given the resources available to the interested and responsible parties, developing a quantitative stakeholder input process could improve the acceptability of TMDL plans. The first model that we have developed is a stakeholder allocation model (SAM). The SAM uses multi-attribute utility theory to quantitatively structure the preferences of the major stakeholder groups, and develop both individual stakeholder group utility functions and an overall stakeholder utility function for a watershed. The test site we selected was the Dominguez Channel watershed in Los Angeles, California. The major stakeholder groups interviewed were (1) non-profit organizations, (2) industry, (3) government agencies and (4) the city government. The decision-maker that will recommend a final TMDL plan is the Los Angeles Regional Water Quality Control Board (LARWQCB). The preliminary results have shown that stakeholders can have different
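
    As a schematic of the multi-attribute utility aggregation underlying such a stakeholder allocation model, the sketch below scores one hypothetical TMDL allocation option using made-up stakeholder groups, attributes, weights, and single-attribute utilities; none of these values come from the Dominguez Channel elicitation.

```python
# Additive multi-attribute utility aggregation: per-group utilities are weighted
# sums of single-attribute utilities, and group utilities are then combined into
# an overall stakeholder utility.  All groups, attributes and weights are hypothetical.
stakeholder_weights = {
    "non_profit": {"water_quality": 0.6, "cost": 0.1, "jobs": 0.3},
    "industry":   {"water_quality": 0.2, "cost": 0.6, "jobs": 0.2},
    "government": {"water_quality": 0.4, "cost": 0.4, "jobs": 0.2},
}
group_weights = {"non_profit": 0.3, "industry": 0.4, "government": 0.3}

def group_utility(attr_utilities, weights):
    """Single-group utility: sum of attribute weight times single-attribute utility."""
    return sum(weights[a] * u for a, u in attr_utilities.items())

def overall_utility(attr_utilities):
    """Weighted aggregation of group utilities into an overall stakeholder utility."""
    return sum(group_weights[g] * group_utility(attr_utilities, w)
               for g, w in stakeholder_weights.items())

# Single-attribute utilities (0-1 scale) for one candidate TMDL allocation option:
option = {"water_quality": 0.8, "cost": 0.4, "jobs": 0.6}
print(round(overall_utility(option), 3))
```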

  1. Scalable pumping approach for extracting the maximum TEM(00) solar laser power.

    PubMed

    Liang, Dawei; Almeida, Joana; Vistas, Cláudia R

    2014-10-20

    A scalable TEM(00) solar laser pumping approach is composed of four pairs of first-stage Fresnel lens-folding mirror collectors, four fused-silica secondary concentrators with light guides of rectangular cross-section for radiation homogenization, and four hollow two-dimensional compound parabolic concentrators that further concentrate the uniform radiation from the light guides onto a 3 mm diameter, 76 mm length Nd:YAG rod within four V-shaped pumping cavities. An asymmetric resonator ensures efficient large-mode matching between the pump light and the oscillating laser light. A TEM(00) laser power of 59.1 W is calculated by ZEMAX and LASCAD numerical analysis, revealing a 20-fold improvement in the brightness figure of merit.

  2. Maximum a posteriori blind image deconvolution with Huber-Markov random-field regularization.

    PubMed

    Xu, Zhimin; Lam, Edmund Y

    2009-05-01

    We propose a maximum a posteriori blind deconvolution approach using a Huber-Markov random-field model. Compared with the conventional maximum-likelihood method, our algorithm not only suppresses noise effectively but also significantly alleviates the artifacts produced by the deconvolution process. The performance of this method is demonstrated by computer simulations.
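
    A simplified, non-blind sketch of the MAP formulation is given below: it minimizes a least-squares data term plus a Huber penalty on neighbouring pixel differences by gradient descent, with an assumed known blur kernel and illustrative weights. The paper treats the blind case, in which the kernel is also estimated, so this is only a sketch of the prior and data terms.

```python
import numpy as np
from scipy.signal import fftconvolve

def huber_deriv(z, delta):
    """Derivative of the Huber penalty."""
    return np.where(np.abs(z) <= delta, z, delta * np.sign(z))

def map_deconvolve(y, k, lam=0.02, delta=0.05, step=0.5, n_iter=300):
    """Gradient descent on  ||k*x - y||^2/2 + lam * sum Huber(neighbour differences)."""
    x = y.copy()
    k_flip = k[::-1, ::-1]
    for _ in range(n_iter):
        resid = fftconvolve(x, k, mode="same") - y          # data-fidelity residual
        grad = fftconvolve(resid, k_flip, mode="same")      # adjoint of the blur operator
        dh = x[:, 1:] - x[:, :-1]                           # horizontal neighbour differences
        dv = x[1:, :] - x[:-1, :]                           # vertical neighbour differences
        gh, gv = huber_deriv(dh, delta), huber_deriv(dv, delta)
        grad[:, 1:] += lam * gh;  grad[:, :-1] -= lam * gh  # gradient of the Huber-MRF prior
        grad[1:, :] += lam * gv;  grad[:-1, :] -= lam * gv
        x -= step * grad
    return x

# Tiny demo: blur a synthetic edge image with a box kernel, add noise, restore.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[:, 32:] = 1.0
kernel = np.ones((5, 5)) / 25.0
blurred = fftconvolve(truth, kernel, mode="same") + 0.01 * rng.standard_normal(truth.shape)
restored = map_deconvolve(blurred, kernel)
print(f"mean abs error  blurred: {np.abs(blurred - truth).mean():.3f}   "
      f"restored: {np.abs(restored - truth).mean():.3f}")
```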

  3. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  4. MLgsc: A Maximum-Likelihood General Sequence Classifier.

    PubMed

    Junier, Thomas; Hervé, Vincent; Wunderlin, Tina; Junier, Pilar

    2015-01-01

    We present a software package for classifying protein or nucleotide sequences against user-specified sets of reference sequences. The software trains a model using a multiple sequence alignment and a phylogenetic tree, both supplied by the user. The latter is used to guide model construction and as a decision tree to speed up the classification process. The software was evaluated on all the 16S rRNA gene sequences of the reference dataset found in the GreenGenes database. On this dataset, the software was shown to achieve an error rate of around 1% at the genus level. Examples of applications based on the nitrogenase subunit NifH gene and on a protein-coding gene found in endospore-forming Firmicutes are also presented. The programs in the package have a simple, straightforward command-line interface for the Unix shell, and are free and open-source. The package has minimal dependencies and thus can be easily integrated in command-line based classification pipelines.

  5. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

  6. Quasi-likelihood for Spatial Point Processes

    PubMed Central

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    2014-01-01

    Summary Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970

  7. Experimental application of the "total maximum daily load" approach as a tool for WFD implementation in temporary rivers

    NASA Astrophysics Data System (ADS)

    Lo Porto, A.; De Girolamo, A. M.; Santese, G.

    2012-04-01

    In this presentation, the experience gained in the first experimental use in the EU (as far as we know) of the concept and methodology of the "Total Maximum Daily Load" (TMDL) is reported. The TMDL is an instrument required by the Clean Water Act in the U.S.A. for the management of water bodies classified as impaired. The TMDL calculates the maximum amount of a pollutant that a waterbody can receive and still safely meet water quality standards. It makes it possible to establish a scientifically based strategy for regulating emission loads according to the characteristics of the watershed/basin. The implementation of the TMDL is a process analogous to the Programmes of Measures required by the WFD, the main difference being the analysis of the linkage between the loads of the different sources and the water quality of water bodies. The TMDL calculation was used in this study for the Candelaro River, a temporary Italian river classified as impaired in the first steps of the implementation of the WFD. A specific approach based on "Load Duration Curves" was adopted for the calculation of nutrient TMDLs, since it is more robust for rivers featuring large changes in flow than the classic approach based on average long-term flow conditions. This methodology establishes the maximum allowable loads across the different flow conditions of a river. It enabled us: to evaluate the allowable loading of a water body; to identify the sources and estimate their loads; to estimate the total loading that the water bodies can receive while meeting the established water quality standards; to link the effects of point and diffuse sources to the water quality status; and, finally, to identify the reduction necessary for each type of source. The load reductions were calculated for nitrate, total phosphorus and ammonia. The simulated measures showed a remarkable ability to reduce the pollutants for the Candelaro River. The use of the Soil and
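
    The load duration curve idea can be illustrated in a few lines, as in the hedged sketch below: the allowable daily load at each flow exceedance percentile is the flow multiplied by the concentration standard, and comparing observed loads against that curve gives the required reduction across flow conditions. Flows, the criterion value, and the monitored concentration are all made-up numbers, not Candelaro River data.

```python
import numpy as np

# Load-duration-curve sketch: allowable load = flow * water-quality criterion.
flows_cms = np.sort(np.random.default_rng(1).lognormal(1.0, 1.2, 365))[::-1]  # daily flows, m3/s
exceedance = np.arange(1, flows_cms.size + 1) / (flows_cms.size + 1) * 100    # % of days flow is exceeded
criterion_mg_l = 10.0                                   # hypothetical concentration standard (mg/L = g/m3)
seconds_per_day = 86400.0
allowable_kg_day = flows_cms * seconds_per_day * criterion_mg_l * 1e-3        # g/day -> kg/day
observed_conc_mg_l = 14.0                               # hypothetical monitored concentration
observed_kg_day = flows_cms * seconds_per_day * observed_conc_mg_l * 1e-3
reduction_needed = np.clip(1.0 - allowable_kg_day / observed_kg_day, 0.0, 1.0)
print(f"allowable load at ~50% exceedance flow: "
      f"{np.interp(50.0, exceedance, allowable_kg_day):.1f} kg/day")
print(f"required load reduction across flow conditions: {reduction_needed.max():.0%}")
```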

  8. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  9. Dynamical Evolution of the Inner Heliosphere Approaching Solar Activity Maximum: Interpreting Ulysses Observations Using a Global MHD Model. Appendix 1

    NASA Technical Reports Server (NTRS)

    Riley, Pete; Mikic, Z.; Linker, J. A.

    2003-01-01

    In this study we describe a series of MHD simulations covering the time period from 12 January 1999 to 19 September 2001 (Carrington Rotation 1945 to 1980). This interval coincided with: (1) the Sun's approach toward solar maximum; and (2) Ulysses' second descent to the southern polar regions, rapid latitude scan, and arrival into the northern polar regions. We focus on the evolution of several key parameters during this time, including the photospheric magnetic field, the computed coronal hole boundaries, the computed velocity profile near the Sun, and the plasma and magnetic field parameters at the location of Ulysses. The model results provide a global context for interpreting the often complex in situ measurements. We also present a heuristic explanation of stream dynamics to describe the morphology of interaction regions at solar maximum and contrast it with the picture that resulted from Ulysses' first orbit, which occurred during more quiescent solar conditions. The simulation results described here are available at: http://sun.saic.com.

  10. Approaching the ground states of the random maximum two-satisfiability problem by a greedy single-spin flipping process

    NASA Astrophysics Data System (ADS)

    Ma, Hui; Zhou, Haijun

    2011-05-01

    In this brief report we explore the energy landscapes of two spin glass models using a greedy single-spin flipping process, Gmax. The ground-state energy density of the random maximum two-satisfiability problem is efficiently approached by Gmax. The achieved energy density e(t) decreases with the evolution time t as e(t) - e(∞) = h (log10 t)^(-z), with a small prefactor h and a scaling coefficient z > 1, indicating an energy landscape with deep and rugged funnel-shape regions. For the ±J Viana-Bray spin glass model, however, the greedy single-spin dynamics quickly gets trapped to a local minimal region of the energy landscape.
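
    A sketch in the spirit of such a greedy single-spin flipping process is shown below on a random MAX-2-SAT instance, taking the energy to be the number of unsatisfied clauses: at each step the flip that lowers the energy the most is applied, and the descent stops at a local minimum. Instance size, tie handling, and the (slow) full energy re-evaluation are simplifications relative to the Gmax process studied above.

```python
import random

# Greedy single-spin-flipping descent on a random MAX-2-SAT instance.
random.seed(0)
n_vars, n_clauses = 64, 192
clauses = [(random.choice([1, -1]) * random.randint(1, n_vars),
            random.choice([1, -1]) * random.randint(1, n_vars))
           for _ in range(n_clauses)]
spins = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

def lit_satisfied(lit):                 # literal +v is true iff spin v is True, -v iff False
    return spins[abs(lit)] == (lit > 0)

def energy():                           # number of unsatisfied clauses
    return sum(not (lit_satisfied(a) or lit_satisfied(b)) for a, b in clauses)

e = energy()
while True:
    best_var, best_e = None, e
    for v in spins:                     # try every single-spin flip
        spins[v] = not spins[v]
        e_try = energy()
        spins[v] = not spins[v]
        if e_try < best_e:
            best_var, best_e = v, e_try
    if best_var is None:                # no flip lowers the energy: local minimum reached
        break
    spins[best_var] = not spins[best_var]
    e = best_e
print("unsatisfied clauses at the greedy local minimum:", e)
```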

  11. The effect of variation in the apolipoprotein B gene on plasma lipid and apolipoprotein B levels. I. A likelihood-based approach to cladistic analysis.

    PubMed

    Hallman, D M; Visvikis, S; Steinmetz, J; Boerwinkle, E

    1994-01-01

    A new method is described for employing family data to test for significant haplotype effects on continuously distributed variables, using likelihood-ratio tests of linear models in which haplotype effects are parameterized and familial correlations taken into account. The method is applied to the apolipoprotein B (Apo B) gene, using 5 polymorphisms (Insertion/deletion, Bsp1286I, XbaI, MspI, EcoRI) to define haplotypes in 121 French nuclear families. Eleven haplotypes were found, five of which, combined, account for over 95% of the sample. A haplotype phylogeny is proposed, and is used to define a nested set of models for testing the effects of Apo B variation on total-, low-density-lipoprotein (LDL)-, and high-density-lipoprotein (HDL)-cholesterol, triglyceride, and Apo B levels. Apo B haplotype effects account for about 10% of the genetic variance and 5% of the total variance in HDL-cholesterol and triglyceride levels. Clusters of evolutionarily-related haplotypes with similar phenotypic effects are identified for HDL-cholesterol and triglycerides. Single haplotypes with statistically significant effects are identified for cholesterol, LDL-cholesterol, and Apo B levels.

  12. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has emerged as an epidemic infectious disease in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and its composite space-time effects have mostly been understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on the variation of dengue cases, under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of the climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  13. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).

    PubMed

    Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
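
    A toy sketch of the distributed likelihood estimation loop, not the authors' software: each simulated participant "device" keeps its own data and exposes only a local log-likelihood, and a central optimizer aggregates the returned values while iterating over parameter vectors. The Gaussian data model and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate ten participant devices, each holding private data.
rng = np.random.default_rng(42)
devices = [rng.normal(3.0, 1.5, size=int(rng.integers(20, 50))) for _ in range(10)]

def local_log_likelihood(data, theta):
    mu, log_sigma = theta                          # evaluated on the participant's device
    return float(np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma))))

def aggregated_negative_ll(theta):
    # Only scalar likelihood values cross the network; the raw data never do.
    return -sum(local_log_likelihood(d, theta) for d in devices)

fit = minimize(aggregated_negative_ll, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(f"mu ≈ {fit.x[0]:.2f}, sigma ≈ {np.exp(fit.x[1]):.2f}")
```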

  15. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128

  16. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 builds on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of the residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary between likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. The posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7
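
    To make the informal/formal distinction concrete, the hedged sketch below contrasts one informal measure (Nash-Sutcliffe efficiency) with a formal Gaussian log-likelihood whose residuals follow a first-order autoregressive model, conditioning on the first residual. The synthetic hydrographs and parameter values are illustrative, and the rainfall-runoff model itself is abstracted away.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Informal measure: Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_ar1_log_likelihood(obs, sim, sigma, rho):
    """Formal measure: Gaussian log-likelihood of AR(1) residual innovations,
    conditioned on the first residual."""
    e = obs - sim                             # raw residuals
    innov = e[1:] - rho * e[:-1]              # AR(1) innovations
    n = innov.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma ** 2) - 0.5 * np.sum(innov ** 2) / sigma ** 2

t = np.arange(100)
obs = 5.0 + 3.0 * np.exp(-t / 20.0) + np.random.default_rng(0).normal(0.0, 0.3, t.size)
sim = 5.0 + 3.0 * np.exp(-t / 22.0)           # stand-in for a model run
print(f"NS = {nash_sutcliffe(obs, sim):.3f}")
print(f"AR(1) Gaussian log-likelihood = {gaussian_ar1_log_likelihood(obs, sim, 0.3, 0.2):.1f}")
```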

  17. Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling.

    PubMed

    Canham, Charles D; Uriarte, María

    2006-02-01

    Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate

  18. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    NASA Technical Reports Server (NTRS)

    Park, Junhong; Palumbo, Daniel L.

    2004-01-01

    The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties that include effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.

  19. Approach to estimating the maximum depth for glacially induced hydraulic jacking in fractured crystalline rock at Forsmark, Sweden

    NASA Astrophysics Data System (ADS)

    Lönnqvist, M.; Hökmark, H.

    2013-09-01

    Hydraulic jacking is a significant dilation of a fracture that occurs when the pore pressure within it exceeds the sum of the fracture's normal stress and tensile strength. This phenomenon may occur during a glacial period because of changes in hydraulic and mechanical boundary conditions. Since hydraulic jacking may alter flow patterns and the transport capacity of the rock mass, its possible effects on the long-term performance of a nuclear waste repository should be considered. We develop an approach to assess glacially induced hydraulic jacking in fractured crystalline rock and establish bounding estimates of the maximum jacking depth for the Swedish Nuclear Fuel and Waste Management Company's (SKB) repository site at Forsmark. The pore pressure is estimated using mechanically uncoupled two-dimensional poroelastic continuum models with hydraulic and mechanical conditions based on SKB's reconstruction of the Weichselian glaciation at this site (120-0 ka B.P.). For warm-based conditions, the water pressure at the ice/bed interface is set at 98% of the mechanical load, whereas for glacial conditions with extensive proglacial permafrost, the corresponding water pressure is set at a (lower) annual average value. We demonstrate that the pore pressure within the uppermost kilometer of rock is mainly governed by the water pressure at the ice/bed interface and that the mechanical impact of the ice load on the pore pressure is sufficiently small to be ignored. Given the current and estimated future stress conditions at Forsmark, hydraulic jacking is mainly of concern for subhorizontal fractures, i.e., it is sufficient to consider situations when the pore pressure exceeds the vertical stress. We conclude that hydraulic jacking at Forsmark will be confined to the uppermost 200 m of the rock mass.

  20. Parametric likelihood inference for interval censored competing risks data

    PubMed Central

    Hudgens, Michael G.; Li, Chenxi

    2014-01-01

    Summary Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV. PMID:24400873

  1. Likelihood reinstates Archaeopteryx as a primitive bird

    PubMed Central

    Lee, Michael S. Y.; Worthy, Trevor H.

    2012-01-01

    The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx–deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx–bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics. PMID:22031726

  2. On the likelihood of forests

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  3. Efficient computations with the likelihood ratio distribution.

    PubMed

    Kruijver, Maarten

    2015-01-01

    What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there is a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
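
    The importance-sampling idea can be seen in a Gaussian toy model, sketched below; the forensic applications replace this mean-shift likelihood ratio with genotype probabilities under the competing hypotheses. The key identity is that an exceedance probability under one hypothesis equals the expectation, under the other hypothesis, of the indicator weighted by the (inverse) likelihood ratio, so rare events under H0 can be estimated efficiently by sampling under H1. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Estimate P(LR > t | H0) for LR of N(mu,1) against N(0,1), three ways.
rng = np.random.default_rng(7)
mu, t, n = 2.0, 1e3, 100_000

def lr(x):                                   # likelihood ratio p_H1(x) / p_H0(x)
    return np.exp(mu * x - 0.5 * mu ** 2)

x_h0 = rng.normal(0.0, 1.0, n)               # naive Monte Carlo under H0 (rare event)
naive = np.mean(lr(x_h0) > t)

x_h1 = rng.normal(mu, 1.0, n)                # importance sampling under H1
is_est = np.mean((lr(x_h1) > t) / lr(x_h1))  # weight each draw by 1/LR

exact = norm.sf(np.log(t) / mu + mu / 2.0)   # closed form for this toy model
print(f"naive: {naive:.2e}   importance sampling: {is_est:.2e}   exact: {exact:.2e}")
```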

  4. Spatial and Quantitative Approach to Incorporating Stakeholder Values into Total Maximum Daily Loads: Dominguez Channel Case Study Final Report

    SciTech Connect

    Stewart, J; Baginski, T; Sicherman, A; Greene, G; Smith, A

    2007-02-05

    Under the Federal Clean Water Act (CWA) states are required to develop and implement Total Maximum Daily Loads (TMDLs) for waters that are not achieving water quality standards. A TMDL specifies the maximum amount of a pollutant that a water body can receive, and allocates the pollutant loadings to point and non-point sources. Lawrence Livermore National Laboratory (LLNL) developed a tool to assist in improving the TMDL process. We developed a stakeholder allocation model (SAM) which uses multi-attribute utility theory to quantitatively structure the preferences of the major stakeholder groups. We then applied a Geographic Information System (GIS) to visualize the results. We used the Dominguez Channel Watershed in Los Angeles County, CA as our case study.

  5. A hybrid likelihood algorithm for risk modelling.

    PubMed

    Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D

    1995-03-01

    The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
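
    The L1/L0 partition described above is sketched below for a toy Poisson person-by-person model with a linear excess relative risk in dose; the synthetic cohort, the rates, and the grid search over the risk coefficient are illustrative stand-ins for the regression machinery of packages such as EPI-CURE.

```python
import numpy as np

# Toy cohort: Poisson events with rate lambda = baseline * (1 + beta * dose).
rng = np.random.default_rng(3)
n = 5_000
dose = rng.uniform(0.0, 2.0, n)                    # individual exposures
follow_up = rng.uniform(10.0, 40.0, n)             # person-years at risk
baseline, beta_true = 0.002, 0.8
events = rng.poisson(baseline * (1.0 + beta_true * dose) * follow_up)

def log_likelihood(beta):
    lam = baseline * (1.0 + beta * dose)
    L1 = np.sum(events * np.log(lam))              # contribution of the observed events
    L0 = np.sum(lam * follow_up)                   # event-free observation (person-time) term
    return L1 - L0                                 # additive constants dropped

betas = np.linspace(0.0, 2.0, 201)
beta_hat = betas[np.argmax([log_likelihood(b) for b in betas])]
print(f"maximum-likelihood excess relative risk per unit dose ≈ {beta_hat:.2f}")
```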

  6. Spatial and Quantitative Approach to Incorporating Stakeholder Values into Total Maximum Daily Loads: Dominguez Channel Case Study

    SciTech Connect

    Stewart, J S; Baginski, T A; Greene, K G; Smith, A; Sicherman, A

    2006-06-23

    The Federal Clean Water Act (CWA) Section 303(d)(1)(A) requires each state to identify those waters that are not achieving water quality standards. The result of this assessment is called the 303(d) list. The CWA also requires states to develop and implement Total Maximum Daily Loads (TMDLs) for these waters on the 303(d) list. A TMDL specifies the maximum amount of a pollutant that a water body can receive and still meet water quality standards, and allocates the pollutant loadings to point and non-point sources. Nationwide, over 34,900 segments of waterways have been listed as impaired by the Environmental Protection Agency (EPA 2006). The EPA enlists state agencies and local communities to submit TMDL plans to reduce discharges by specified dates or have them developed by the EPA. The Department of Energy requested Lawrence Livermore National Laboratory (LLNL) to develop appropriate tools to assist in improving the TMDL process. An investigation of this process by LLNL found that plans to reduce discharges were being developed based on a wide range of site investigation methods. Our investigation found that given the resources available to the interested and responsible parties, developing a quantitative stakeholder input process and using visualization tools to display quantitative information could improve the acceptability of TMDL plans. We developed a stakeholder allocation model (SAM) which uses multi-attribute utility theory to quantitatively structure the preferences of the major stakeholder groups. We then applied GIS to display allocation options in maps representing economic activity, community groups, and city agencies. This allows allocation options and stakeholder concerns to be represented in both space and time. The primary goal of this tool is to provide a quantitative and visual display of stakeholder concerns over possible TMDL options.

  7. 76 FR 78256 - Request for Nominations of Experts for the Review of Approaches To Derive a Maximum Contaminant...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-16

    ... EPA's approaches to derive an MCLG for perchlorate. In 2011, EPA announced its decision (76 FR 7762... requests contact information about the person making the nomination; contact information about the nominee... other factors, can be influenced by work history and affiliation), and the collective breadth...

  8. MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING

    PubMed Central

    Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao

    2013-01-01

    We study a marginal empirical likelihood approach in scenarios when the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and the generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainties of such estimators. Such a merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive to distributional assumptions, and can be conveniently adapted to be applied in a broad range of scenarios such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach. PMID:24415808

  9. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India.

    PubMed

    Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R

    2012-11-01

    The 180 ship recycling yards located on Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest cluster engaged in ship dismantling. About 350 ships are dismantled yearly (avg. 10,000 tons of steel per ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates or scraping of painted metal surfaces are among the most commonly performed operations during ship breaking. The pollutants released from a typical plate-cutting operation can either affect workers directly by contaminating the breathing zone (air pollution) or add to the pollution load of the intertidal zone and contaminate sediments when they are emitted in the secondary working zone and subjected to tidal forces. The mathematical modeling exercise performed in this study had a two-pronged purpose: first, to estimate the zone of influence over which the effect of the plume would extend; and second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm(3) and 428 μg/Nm(3) (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could be placed between 8 and 30 μg/Nm(3). These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) for Pb (0.5 μg/Nm(3)). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships and of ideal procedures and the corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing worker exposure and the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India.

  10. Maximum Jailbreak

    NASA Astrophysics Data System (ADS)

    Singleton, B.

    First formulated one hundred and fifty years ago by the heretical scholar Nikolai Federov, the doctrine of cosmism begins with an absolute refusal to treat the most basic factors conditioning life on Earth, gravity and death, as necessary constraints on action. As manifest in the intoxicated cheers of its early advocates that humans should storm the heavens and conquer death, cosmism's foundational gesture was to conceive of the Earth as a trap; it therefore understood the duty of philosophy, economics and design to be the creation of means to escape it. This could be regarded as a jailbreak at the maximum possible scale, a heist in which the human species could steal itself from the vault of the Earth. Now that, after several decades of relative disinterest, new space ventures are again inspiring scientific, technological and popular imaginations, this essay explores what kind of cosmism might be constructed today. Cosmism's position as a means of escape is both reviewed and evaluated by reflecting on the potential of technology that can actually help us achieve its aims, and through the lens of the state-of-the-art philosophy of accelerationism, which seeks to outrun modern tropes by intensifying them.

  11. Development of a Quantitative Approach to Incorporating Stakeholder Values into Total Maximum Daily Loads: Dominguez Channel Case Study

    SciTech Connect

    Stewart, J

    2004-06-08

    The federal Clean Water Act (CWA) Section 303(d)(1)(A) requires each state to conduct a biennial assessment of its waters, and identify those waters that are not achieving water quality standards. The result of this assessment is called the 303(d) list. The CWA also requires states to establish a priority ranking for waters on the 303(d) list of impaired waters and to develop and implement Total Maximum Daily Loads (TMDLs) for these waters. Over 30,000 segments of waterways have been listed as impaired by the Environmental Protection Agency (EPA). The EPA has requested local communities to submit plans to reduce discharges by specified dates or have them developed by the EPA. An investigation of this process found that plans to reduce discharges were being developed based on a wide range of site investigation methods. The Department of Energy requested Lawrence Livermore National Laboratory to develop appropriate tools to assist in improving the TMDL process. The EPA has shown support and encouragement of this effort. Our investigation found that improving the stakeholder input process would facilitate many of the TMDL processes, given the resources available to the interested and responsible parties. The first model that we have developed is a stakeholder allocation model (SAM). The SAM uses multi-attribute utility theory to quantitatively structure the preferences of the major stakeholder groups and to develop both individual stakeholder-group utility functions and an overall stakeholder utility function for a watershed. The test site we selected was the Dominguez Channel watershed in Los Angeles, California. The major stakeholder groups interviewed were (1) nongovernmental organizations, (2) oil refineries, (3) the Port of Los Angeles, and (4) the Los Angeles Department of Public Works. The decision-maker that will determine the acceptable utility values is the Los Angeles Regional Water Quality Board. The preliminary results have shown some different values among

  12. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  13. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
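
    A minimal sketch of the approximation for a Poisson GLM with exponential nonlinearity and Gaussian covariates: the data-dependent sum of exp(x_i'w) in the exact log-likelihood is replaced by n times its closed-form expectation under the covariate distribution. The simulated data and the assumption of known Gaussian covariate statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 10
mu, Sigma = np.zeros(d), np.eye(d)        # covariate distribution, assumed known and Gaussian
X = rng.multivariate_normal(mu, Sigma, size=n)
w_true = rng.normal(scale=0.2, size=d)
y = rng.poisson(np.exp(X @ w_true))

def exact_loglik(w):
    eta = X @ w
    return y @ eta - np.exp(eta).sum()    # up to the w-independent term -sum(log y_i!)

def expected_loglik(w):
    # E[exp(x'w)] = exp(mu'w + 0.5 w'Sigma w) for Gaussian covariates, so the
    # sum over exp(x_i'w) collapses to a single closed-form term.
    return y @ (X @ w) - n * np.exp(mu @ w + 0.5 * w @ Sigma @ w)

print(exact_loglik(w_true), expected_loglik(w_true))  # close for large n
```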

  14. Likelihood estimation in image warping

    NASA Astrophysics Data System (ADS)

    Machado, Alexei M. C.; Campos, Mario F.; Gee, James C.

    1999-05-01

    The problem of matching two images can be posed as the search for a displacement field which assigns each point of one image to a point in the second image in such a way that a likelihood function is maximized, subject to topological constraints. Since the images may be acquired by different scanners, the relationship between their intensity levels is generally unknown. The matching problem is usually solved iteratively by optimization methods. The evaluation of each candidate solution is based on an objective function which favors smooth displacements that yield likely intensity matches. This paper is concerned with the construction of a likelihood function that is derived from the information contained in the data and is thus applicable to data acquired from an arbitrary scanner. The basic assumption of the method is that the pair of images to be matched contains roughly the same proportion of tissues, which will be reflected in their gray-level histograms. Experiments with MRI images corrupted by strong non-linear intensity shading show the method's effectiveness for modeling intensity artifacts. Image matching can thus be made robust to a wide range of intensity degradations.

  15. Maximum-Entropy Inference with a Programmable Annealer

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
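
    A toy illustration of the two decoding rules (not the annealer experiment itself) on a small random Ising model in a field, solved by exhaustive enumeration: maximum-likelihood decoding returns the ground-state configuration, whereas maximum-entropy (finite-temperature) decoding takes the sign of each spin's average over the Boltzmann distribution. The couplings, fields, and temperature below are arbitrary.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
N = 10                                            # small enough to enumerate all 2^N states
J = rng.normal(size=(N, N)); J = np.triu(J, 1)    # random couplings (upper triangle only)
h = rng.normal(scale=0.5, size=N)                 # random local fields (the noisy "received" bits)

def energy(s):
    return -(s @ J @ s) - h @ s

states = np.array(list(product([-1, 1], repeat=N)))
E = np.array([energy(s) for s in states])

# Maximum-likelihood decoding: the single lowest-energy configuration.
s_ml = states[np.argmin(E)]

# Maximum-entropy (finite-temperature) decoding at inverse temperature beta:
# average each spin over the Boltzmann distribution and take its sign.
beta = 1.0
w = np.exp(-beta * (E - E.min())); w /= w.sum()
s_maxent = np.sign(w @ states)

print("ML decode:     ", s_ml)
print("MaxEnt decode: ", s_maxent.astype(int))
```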

  16. Maximum-Entropy Inference with a Programmable Annealer.

    PubMed

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A

    2016-03-03

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  17. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311

  18. Quasi-likelihood estimation for relative risk regression models.

    PubMed

    Carter, Rickey E; Lipsitz, Stuart R; Tilley, Barbara C

    2005-01-01

    For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184), also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713). PMID:15618526
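
    A minimal numpy sketch of the Poisson working-model idea for a binary outcome: the log-linear model is fit with the Poisson score equations (Newton-Raphson), and a method-of-moments sandwich covariance is used so that inference does not rely on the Poisson variance being correct. This is a generic illustration of the trick with simulated trial data, not the authors' implementation.

```python
import numpy as np

def poisson_relative_risk(X, y, iters=50):
    """Fit log P(y=1|x) = x'beta via Poisson score equations; return beta and robust SEs."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):                      # Newton-Raphson on the Poisson log-likelihood
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)
        info = X.T @ (X * mu[:, None])
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < 1e-10:
            break
    # sandwich (method-of-moments) covariance: A^{-1} B A^{-1}
    mu = np.exp(X @ beta)
    A = X.T @ (X * mu[:, None])
    B = X.T @ (X * ((y - mu) ** 2)[:, None])
    cov = np.linalg.solve(A, np.linalg.solve(A, B).T)
    return beta, np.sqrt(np.diag(cov))

# toy trial: treatment doubles the success probability (true log relative risk = log 2)
rng = np.random.default_rng(3)
n = 2000
treat = rng.integers(0, 2, n)
y = rng.binomial(1, np.where(treat == 1, 0.8, 0.4))
X = np.column_stack([np.ones(n), treat])
beta, se = poisson_relative_risk(X, y)
print("estimated relative risk:", np.exp(beta[1]), "x/ factor", np.exp(1.96 * se[1]))
```

    Here the exponentiated treatment coefficient estimates the relative risk directly (about 2 in this toy trial), with a sandwich-based confidence factor.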

  19. A Generalized, Likelihood-Free Method for Posterior Estimation

    PubMed Central

    Turner, Brandon M.; Sederberg, Per B.

    2014-01-01

    Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds, and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the Linear Ballistic Accumulator model, which has a known likelihood, and the Leaky Competing Accumulator model whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics – a feat that was not previously possible. PMID:24258272
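
    A generic sketch in the same spirit of likelihood-free posterior estimation: at each proposed parameter value, the intractable likelihood is replaced by a kernel density estimate built from data simulated at that value, and the result drives a plain Metropolis sampler. The Gaussian toy model, prior, and tuning constants are illustrative, and this kernel-based stand-in is not the authors' specific algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(4)
observed = rng.normal(loc=1.0, scale=1.0, size=50)     # data from a model we treat as simulation-only

def simulate(theta, size=2000):
    return rng.normal(loc=theta, scale=1.0, size=size)  # the simulator is the only model access

def approx_loglik(theta):
    kde = gaussian_kde(simulate(theta))                  # estimate the likelihood from simulations
    return np.sum(np.log(kde(observed) + 1e-300))

def log_post(theta):
    return approx_loglik(theta) + norm.logpdf(theta, 0.0, 10.0)  # vague Gaussian prior

# plain Metropolis sampler driven by the simulation-based log posterior
theta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(2000):
    prop = theta + rng.normal(scale=0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print("posterior mean of theta ~", np.mean(samples[500:]))
```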

  20. Likelihood Methods for Testing Group Problem Solving Models with Censored Data.

    ERIC Educational Resources Information Center

    Regal, Ronald R.; Larntz, Kinley

    1978-01-01

    Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)

  1. An analytical approach for beam loading compensation and excitation of maximum cavity field gradient in a coupled cavity-waveguide system

    NASA Astrophysics Data System (ADS)

    Kelisani, M. Dayyani; Doebert, S.; Aslaninejad, M.

    2016-08-01

    The critical process of beam loading compensation in high-intensity accelerators brings under control the undesired effect of beam-induced fields on the accelerating structures. A new analytical approach for optimizing standing-wave accelerating structures is presented which is extremely fast and agrees very well with simulations. A perturbative analysis of cavity and waveguide excitation, based on the Bethe theorem and normal-mode expansion, is developed to compensate the beam loading effect and excite the maximum field gradient in the cavity. The method provides the optimum values for the coupling factor and the cavity detuning. While the approach is very accurate, it shortens the calculation time massively compared with simulation software.

  2. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  3. Groups, information theory, and Einstein's likelihood principle

    NASA Astrophysics Data System (ADS)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  4. Groups, information theory, and Einstein's likelihood principle.

    PubMed

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  5. Groups, information theory, and Einstein's likelihood principle.

    PubMed

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts. PMID:27176234

  6. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289

  7. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina

    PubMed Central

    LEE, SEUNG-JAE; YEATTS, KARIN; SERRE, MARC L.

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely on the COSP of integrating prevalence data observed at the county level together with data observed at a targeted local scale equivalent to the scale of school districts. Our spatially detailed maps can be used for different applications, ranging from exploratory and hypothesis-generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  8. Use of the Maximum Cumulative Ratio As an Approach for Prioritizing Aquatic Coexposure to Plant Protection Products: A Case Study of a Large Surface Water Monitoring Database.

    PubMed

    Vallotton, Nathalie; Price, Paul S

    2016-05-17

    This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs with concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating the combination of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments. PMID:27057923
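
    The MCR itself is a simple summary statistic: with hazard quotients HQ_i = concentration_i / benchmark_i, the hazard index is HI = Σ HQ_i and MCR = HI / max_i HQ_i, so values near 1 indicate that a single substance dominates the mixture while larger values indicate genuinely cumulative risk. A minimal sketch with made-up numbers:

```python
import numpy as np

def mcr(concentrations, benchmarks):
    """Hazard index, maximum hazard quotient, and maximum cumulative ratio for one sample."""
    hq = np.asarray(concentrations, dtype=float) / np.asarray(benchmarks, dtype=float)
    hi = hq.sum()
    return hi, hq.max(), hi / hq.max()

# hypothetical mixture of three PPPs in one water sample (units consistent with the benchmarks)
hi, max_hq, ratio = mcr([0.12, 0.05, 0.02], [0.30, 0.10, 0.20])
print(f"HI={hi:.2f}  max HQ={max_hq:.2f}  MCR={ratio:.2f}")
```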

  9. Use of the Maximum Cumulative Ratio As an Approach for Prioritizing Aquatic Coexposure to Plant Protection Products: A Case Study of a Large Surface Water Monitoring Database.

    PubMed

    Vallotton, Nathalie; Price, Paul S

    2016-05-17

    This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs with concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating the combination of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.

  10. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent.

    PubMed

    Lohse, Konrad; Chmelik, Martin; Martin, Simon H; Barton, Nicholas H

    2016-02-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666

  11. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent

    PubMed Central

    Lohse, Konrad; Chmelik, Martin; Martin, Simon H.; Barton, Nicholas H.

    2016-01-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666

  12. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated into the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
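
    A generic sketch of the MAP-style objective described above: maximize an entropy prior on a nonnegative source image minus a Gaussian data-misfit (likelihood) term for a linear forward model b ≈ A f, here by simple projected gradient ascent on a toy problem. The random forward matrix, weights, and step size are illustrative placeholders, not a MEG lead-field or the original solver, and the true sources should typically (not provably) rank near the top of the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 30, 100
A = rng.normal(size=(n_sensors, n_sources))        # stand-in forward (lead-field) matrix
f_true = np.zeros(n_sources); f_true[[20, 70]] = 1.0
b = A @ f_true + rng.normal(scale=0.05, size=n_sensors)

alpha, m = 1e-2, 1e-3                              # entropy weight and flat default image m
f = np.full(n_sources, m)
step = 1e-3
for _ in range(5000):
    # gradient of  alpha * S(f) - 0.5 * ||A f - b||^2,  with entropy S(f) = -sum f*log(f/m)
    grad = -alpha * (np.log(f / m) + 1.0) - A.T @ (A @ f - b)
    f = np.clip(f + step * grad, 1e-8, None)       # projected ascent keeps the image positive
print("largest reconstructed sources:", np.argsort(f)[-4:])  # true sources are 20 and 70
```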

  13. A likelihood method to cross-calibrate air-shower detectors

    NASA Astrophysics Data System (ADS)

    Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results, if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In the recent years both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.

  14. Computation of maximum gust loads in nonlinear aircraft using a new method based on the matched filter approach and numerical optimization

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Heeg, Jennifer; Perry, Boyd, III

    1990-01-01

    Time-correlated gust loads are time histories of two or more load quantities due to the same disturbance time history. Time correlation provides knowledge of the value (magnitude and sign) of one load when another is maximum. At least two analysis methods have been identified that are capable of computing maximized time-correlated gust loads for linear aircraft. Both methods solve for the unit-energy gust profile (gust velocity as a function of time) that produces the maximum load at a given location on a linear airplane. Time-correlated gust loads are obtained by re-applying this gust profile to the airplane and computing multiple simultaneous load responses. Such time histories are physically realizable and may be applied to aircraft structures. Within the past several years there has been much interest in obtaining a practical analysis method which is capable of solving the analogous problem for nonlinear aircraft. Such an analysis method has been the focus of an international committee of gust loads specialists formed by the U.S. Federal Aviation Administration and was the topic of a panel discussion at the Gust and Buffet Loads session at the 1989 SDM Conference in Mobile, Alabama. The kinds of nonlinearities common on modern transport aircraft are indicated. The Statistical Discrete Gust method is capable of being, but so far has not been, applied to nonlinear aircraft. To make the method practical for nonlinear applications, a search procedure is essential. Another method is based on Matched Filter Theory and, in its current form, is applicable to linear systems only. The purpose here is to present the status of an attempt to extend the matched filter approach to nonlinear systems. The extension uses Matched Filter Theory as a starting point and then employs a constrained optimization algorithm to attack the nonlinear problem.
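
    For the linear case, the matched-filter result can be demonstrated in a few lines: for a linear system with impulse response h, the unit-energy input that maximizes the peak output is the time-reversed impulse response, and by the Cauchy-Schwarz inequality no other unit-energy input can do better. The damped oscillator below is an arbitrary stand-in for a load response, not an aircraft gust-load model.

```python
import numpy as np

# impulse response of an arbitrary lightly damped second-order system (stand-in load response)
dt, t = 0.01, np.arange(0, 10, 0.01)
h = np.exp(-0.3 * t) * np.sin(2 * np.pi * 1.0 * t)

def peak_response(u):
    u = u / np.sqrt(np.sum(u**2) * dt)          # normalise the input to unit energy
    y = np.convolve(u, h)[: len(t)] * dt        # linear system output
    return np.max(np.abs(y))

matched = h[::-1]                               # matched-filter input: time-reversed impulse response
random_u = np.random.default_rng(9).normal(size=len(t))
print("peak load, matched input:", peak_response(matched))
print("peak load, random input: ", peak_response(random_u))
```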

  15. Setting the renormalization scale in perturbative QCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Hao; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.; Mojaza, Matin

    2015-05-01

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the "Principle of Maximum Conformality" (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the "sequential extended BLM" (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. We then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R_{e+e-} at four-loop order in pQCD.

  16. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    SciTech Connect

    Ma, Hong-Hao; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.; Mojaza, Matin

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  17. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C.

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  18. Likelihood-based population independent component analysis

    PubMed Central

    Eloyan, Ani; Crainiceanu, Ciprian M.; Caffo, Brian S.

    2013-01-01

    Independent component analysis (ICA) is a widely used technique for blind source separation, used heavily in several scientific research areas including acoustics, electrophysiology, and functional neuroimaging. We propose a scalable two-stage iterative true group ICA methodology for analyzing population level functional magnetic resonance imaging (fMRI) data where the number of subjects is very large. The method is based on likelihood estimators of the underlying source densities and the mixing matrix. As opposed to many commonly used group ICA algorithms, the proposed method does not require significant data reduction by a 2-fold singular value decomposition. In addition, the method can be applied to a large group of subjects since the memory requirements are not restrictive. The performance of our approach is compared with a commonly used group ICA algorithm via simulation studies. Furthermore, the proposed method is applied to a large collection of resting state fMRI datasets. The results show that established brain networks are well recovered by the proposed algorithm. PMID:23314416

  19. Likelihood methods for cluster dark energy surveys

    SciTech Connect

    Hu, Wayne; Cohn, J. D.

    2006-03-15

    Galaxy cluster counts at high redshift, binned into spatial pixels and binned into ranges in an observable proxy for mass, contain a wealth of information on both the dark energy equation of state and the mass selection function required to extract it. The likelihood of the number counts follows a Poisson distribution whose mean fluctuates with the large-scale structure of the universe. We develop a joint likelihood method that accounts for these distributions. Maximization of the likelihood over a theoretical model that includes both the cosmology and the observable-mass relations allows for a joint extraction of dark energy and cluster structural parameters.
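
    The Poisson piece of such a likelihood is compact; a minimal sketch evaluating ln L = Σ_i [N_i ln m_i − m_i − ln N_i!] for binned counts against two candidate sets of predicted means (the sample-variance contribution from large-scale structure that the authors add on top is omitted here). The counts and means are invented for illustration.

```python
import numpy as np
from scipy.special import gammaln

def poisson_lnlike(counts, means):
    """ln L = sum_i [ N_i ln m_i - m_i - ln N_i! ] for independent Poisson bins."""
    counts, means = np.asarray(counts), np.asarray(means)
    return np.sum(counts * np.log(means) - means - gammaln(counts + 1))

# hypothetical counts in redshift/mass bins and the means predicted by two candidate models
N = np.array([12, 8, 5, 2, 1])
print(poisson_lnlike(N, means=np.array([11.0, 7.5, 4.8, 2.2, 0.9])),
      poisson_lnlike(N, means=np.array([20.0, 12.0, 6.0, 3.0, 1.5])))
```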

  20. Activation Likelihood Estimation meta-analysis revisited

    PubMed Central

    Eickhoff, Simon B.; Bzdok, Danilo; Laird, Angela R.; Kurth, Florian; Fox, Peter T.

    2011-01-01

    A widely used technique for coordinate-based meta-analysis of neuroimaging data is activation likelihood estimation (ALE), which determines the convergence of foci reported from different experiments. ALE analysis involves modelling these foci as probability distributions whose width is based on empirical estimates of the spatial uncertainty due to the between-subject and between-template variability of neuroimaging data. ALE results are assessed against a null-distribution of random spatial association between experiments, resulting in random-effects inference. In the present revision of this algorithm, we address two remaining drawbacks of the previous algorithm. First, the assessment of spatial association between experiments was based on a highly time-consuming permutation test, which nevertheless entailed the danger of underestimating the right tail of the null-distribution. In this report, we outline how this previous approach may be replaced by a faster and more precise analytical method. Second, the previously applied correction procedure, i.e. controlling the false discovery rate (FDR), is supplemented by new approaches for correcting the family-wise error rate and the cluster-level significance. The different alternatives for drawing inference on meta-analytic results are evaluated on an exemplary dataset on face perception as well as discussed with respect to their methodological limitations and advantages. In summary, we thus replaced the previous permutation algorithm with a faster and more rigorous analytical solution for the null-distribution and comprehensively address the issue of multiple-comparison corrections. The proposed revision of the ALE-algorithm should provide an improved tool for conducting coordinate-based meta-analyses on functional imaging data. PMID:21963913

  1. Likelihood-Free Inference in High-Dimensional Models.

    PubMed

    Kousathanas, Athanasios; Leuenberger, Christoph; Helfer, Jonas; Quinodoz, Mathieu; Foll, Matthieu; Wegmann, Daniel

    2016-06-01

    Methods that bypass analytical evaluations of the likelihood function have become an indispensable tool for statistical inference in many fields of science. These so-called likelihood-free methods rely on accepting and rejecting simulations based on summary statistics, which limits them to low-dimensional models for which the value of the likelihood is large enough to result in manageable acceptance rates. To get around these issues, we introduce a novel, likelihood-free Markov chain Monte Carlo (MCMC) method combining two key innovations: updating only one parameter per iteration and accepting or rejecting this update based on subsets of statistics approximately sufficient for this parameter. This increases acceptance rates dramatically, rendering this approach suitable even for models of very high dimensionality. We further derive that for linear models, a one-dimensional combination of statistics per parameter is sufficient and can be found empirically with simulations. Finally, we demonstrate that our method readily scales to models of very high dimensionality, using toy models as well as by jointly inferring the effective population size, the distribution of fitness effects (DFE) of segregating mutations, and selection coefficients for each locus from data of a recent experiment on the evolution of drug resistance in influenza. PMID:27052569
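
    A schematic toy version of the key structural idea, updating one parameter per iteration and accepting or rejecting it using only a low-dimensional statistic informative about that parameter, for a model in which each parameter is the mean of its own data column. The tolerance, proposal scale, and flat prior are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 20, 200                                   # 20 parameters, 200 observations each
theta_true = rng.normal(size=d)
data = theta_true + rng.normal(size=(n, d))
s_obs = data.mean(axis=0)                        # one approximately sufficient statistic per parameter

def simulate_stat(theta_j):
    """Simulate one coordinate of the model and return its summary statistic."""
    return (theta_j + rng.normal(size=n)).mean()

eps = 0.05                                       # tolerance on the per-parameter statistic
theta = np.zeros(d)
chain = []
for it in range(20000):
    j = it % d                                   # update a single parameter per iteration
    prop = theta[j] + rng.normal(scale=0.2)
    if abs(simulate_stat(prop) - s_obs[j]) < eps:    # accept using only statistic j
        theta[j] = prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])
print("mean abs error of posterior means:", np.abs(chain.mean(axis=0) - theta_true).mean())
```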

  2. Likelihood-Free Inference in High-Dimensional Models.

    PubMed

    Kousathanas, Athanasios; Leuenberger, Christoph; Helfer, Jonas; Quinodoz, Mathieu; Foll, Matthieu; Wegmann, Daniel

    2016-06-01

    Methods that bypass analytical evaluations of the likelihood function have become an indispensable tool for statistical inference in many fields of science. These so-called likelihood-free methods rely on accepting and rejecting simulations based on summary statistics, which limits them to low-dimensional models for which the value of the likelihood is large enough to result in manageable acceptance rates. To get around these issues, we introduce a novel, likelihood-free Markov chain Monte Carlo (MCMC) method combining two key innovations: updating only one parameter per iteration and accepting or rejecting this update based on subsets of statistics approximately sufficient for this parameter. This increases acceptance rates dramatically, rendering this approach suitable even for models of very high dimensionality. We further derive that for linear models, a one-dimensional combination of statistics per parameter is sufficient and can be found empirically with simulations. Finally, we demonstrate that our method readily scales to models of very high dimensionality, using toy models as well as by jointly inferring the effective population size, the distribution of fitness effects (DFE) of segregating mutations, and selection coefficients for each locus from data of a recent experiment on the evolution of drug resistance in influenza.

  3. Detection of T-wave Alternans in Fetal Magnetocardiography Using the Generalized Likelihood Ratio Test

    PubMed Central

    Yu, Suhong; Van Veen, Barry D.; Wakai, Ronald T.

    2013-01-01

    T-wave alternans (TWA) is an indicator of cardiac instability and is associated with life-threatening ventricular arrhythmias. Detection of TWA in the adult has been widely investigated and is used routinely for cardiac risk assessment. Detection of TWA in the fetus, however, is much more difficult due to the low amplitude and variable configuration of the signal, the presence of strong interferences, and the brevity of fetal TWA episodes. In this paper we present a statistical detector based on the generalized likelihood ratio test that is designed for detection of TWA in the fetus. The performance of the detector is evaluated by constructing receiver operating characteristic curves, using simulated data and real data from subjects with macroscopic TWA. The detector is capable of detecting TWA episodes as brief as 20 beats. The detection performance is improved significantly by modeling the fetal T-wave as a low-rank, low-bandwidth signal, and using maximum likelihood estimation to estimate the model parameters. This approach enables all of the data to be used to estimate the noise statistics, providing highly effective suppression of interference, including maternal interference. The method is suitable for routine use because it can be applied to raw, unprocessed recordings, allowing automated analysis of extended fetal magnetocardiography recordings. PMID:23568477
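
    The core of a GLRT for alternans can be sketched on a beat-to-beat series of T-wave amplitudes: under H1 the series is a mean plus an alternating component plus Gaussian noise, under H0 the alternating component is absent, and the statistic is twice the log ratio of the maximized likelihoods, which reduces to a comparison of residual sums of squares. This toy version omits the low-rank spatio-temporal modeling and interference suppression described in the paper.

```python
import numpy as np

def glrt_alternans(a):
    """GLRT statistic for an alternating component in a beat series a_n = m + v*(-1)^n + noise."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    alt = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)
    # ML estimates under H1 (mean + alternans) and H0 (mean only);
    # for an even number of beats the two least-squares estimates decouple.
    m_hat, v_hat = a.mean(), (a * alt).mean()
    rss1 = np.sum((a - m_hat - v_hat * alt) ** 2)
    rss0 = np.sum((a - m_hat) ** 2)
    return n * np.log(rss0 / rss1)        # 2*(logL1 - logL0) with the noise variance profiled out

rng = np.random.default_rng(7)
beats = 20
clean = 5.0 + 0.3 * np.where(np.arange(beats) % 2 == 0, 1.0, -1.0)   # 0.3-unit alternans
print("with alternans:   ", glrt_alternans(clean + rng.normal(scale=0.2, size=beats)))
print("without alternans:", glrt_alternans(5.0 + rng.normal(scale=0.2, size=beats)))
```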

  4. The likelihood of observing dust-stimulated phytoplankton growth in waters proximal to the Australian continent

    NASA Astrophysics Data System (ADS)

    Cropp, R. A.; Gabric, A. J.; Levasseur, M.; McTainsh, G. H.; Bowie, A.; Hassler, C. S.; Law, C. S.; McGowan, H.; Tindale, N.; Viscarra Rossel, R.

    2013-05-01

    We develop a tool to assist in identifying a link between naturally occurring aeolian dust deposition and phytoplankton response in the ocean. Rather than examining a single, or small number of dust deposition events, we take a climatological approach to estimate the likelihood of observing a definitive link between dust deposition and a phytoplankton bloom for the oceans proximal to the Australian continent. We use a dust storm index (DSI) to determine dust entrainment in the Lake Eyre Basin (LEB) and an ensemble of modelled atmospheric trajectories of dust transport from the basin, the major dust source in Australia. Deposition into the ocean is computed as a function of distance from the LEB source and the local over-ocean precipitation. The upper ocean's receptivity to nutrients, including dust-borne iron, is defined in terms of time-dependent, monthly climatological fields for light, mixed layer depth and chlorophyll concentration relative to the climatological monthly maximum. The resultant likelihood of a dust-phytoplankton link being observed is then mapped as a function of space and time. Our results suggest that the Southern Ocean (north of 45°S), the North West Shelf, and Great Barrier Reef are ocean regions where a rapid biological response to dust inputs is most likely to be observed. Conversely, due to asynchrony between deposition and ocean receptivity, direct causal links appear unlikely to be observed in the Tasman Sea and Southern Ocean south of 45°S.

  5. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570

  6. Shifting Distributions of Adult Atlantic Sturgeon Amidst Post-Industrialization and Future Impacts in the Delaware River: a Maximum Entropy Approach

    PubMed Central

    Breece, Matthew W.; Oliver, Matthew J.; Cimino, Megan A.; Fox, Dewayne A.

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960’s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570

  7. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  8. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
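
    The idea can be sketched with the familiar constrained-die example rather than anything from the paper itself: solve classic MaxEnt for a fixed mean constraint, then treat the constraint value as Gaussian-uncertain and push that uncertainty through the MaxEnt solution by sampling. The constraint value, its standard deviation, and the sample size below are arbitrary assumptions.

```python
# Minimal sketch (my construction, not the paper's calculation): classic MaxEnt
# for a six-sided die constrained to a given mean, then a "generalized" version
# in which the constrained mean is Gaussian-uncertain and that uncertainty is
# propagated through the MaxEnt solution.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_probs(mean):
    """Classic MaxEnt pmf p_i proportional to exp(lam * i), with sum(i * p_i) = mean."""
    def gap(lam):
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum() - mean
    lam = brentq(gap, -10, 10)          # solve for the Lagrange multiplier
    w = np.exp(lam * faces)
    return w / w.sum()

# Classic MaxEnt: treat the empirical mean 4.5 as exact.
p_point = maxent_probs(4.5)

# Generalized MaxEnt: the mean is known only up to Gaussian uncertainty, so the
# MaxEnt probabilities themselves acquire a distribution.
rng = np.random.default_rng(1)
means = np.clip(rng.normal(4.5, 0.2, 2000), 1.05, 5.95)
samples = np.array([maxent_probs(m) for m in means])

print("point MaxEnt pmf:      ", np.round(p_point, 3))
print("posterior mean pmf:    ", np.round(samples.mean(axis=0), 3))
print("posterior sd per face: ", np.round(samples.std(axis=0), 3))
```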

  9. MALCOM X: Combining maximum likelihood continuity mapping with Gaussian mixture models

    SciTech Connect

    Hogden, J.; Scovel, J.C.

    1998-11-01

    GMMs are among the best speaker recognition algorithms currently available. However, the GMM's estimate of the probability of the speech signal does not change if the authors randomly shuffle the temporal order of the feature vectors, even though the actual probability of observing the shuffled signal would be dramatically different--probably near zero. A potential way to improve the performance of GMMs is to incorporate temporal information into the estimate of the probability of the data. Doing so could improve speech recognition, speaker recognition, and potentially aid in detecting lies (abnormalities) in speech data. As described in other documents (Hogden, 1996), MALCOM is an algorithm that can be used to estimate the probability of a sequence of categorical data. MALCOM can also be applied to speech (and other real-valued sequences) if windows of the speech are first categorized using a technique such as vector quantization (Gray, 1984). However, by quantizing the windows of speech, MALCOM ignores information about the within-category differences of the speech windows. Thus, MALCOM and GMMs complement each other: MALCOM is good at using sequence information, whereas GMMs capture within-category differences better than the vector quantization typically used by MALCOM. An extension of MALCOM (MALCOM X) that can be used for estimating the probability of a speech sequence is described here.
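
    The order-invariance that motivates MALCOM is easy to demonstrate. The sketch below (illustrative only, not the MALCOM X implementation; the feature dimensions and component count are arbitrary) fits a GMM to synthetic feature vectors and shows that the summed log-likelihood is unchanged, up to floating-point rounding, when the frames are temporally shuffled.

```python
# Minimal sketch: a GMM's log-likelihood for a sequence of feature vectors does
# not depend on their temporal order, which is the gap a sequence model such as
# MALCOM is meant to fill.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fake "speech" features: 500 frames of 13-dimensional cepstral-like vectors.
frames = rng.normal(size=(500, 13))

gmm = GaussianMixture(n_components=8, random_state=0).fit(frames)

ll_original = gmm.score_samples(frames).sum()                  # sum of per-frame log-densities
ll_shuffled = gmm.score_samples(rng.permutation(frames)).sum() # same frames, shuffled order

# The two totals agree up to floating-point summation order.
print(ll_original, ll_shuffled)
```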

  10. Calibrating CAT Pools and Online Pretest Items Using Marginal Maximum Likelihood Methods.

    ERIC Educational Resources Information Center

    Pommerich, Mary; Segall, Daniel O.

    Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…

  11. Iterative Suboptimal Maximum Likelihood Receiver for Nonlinearly Distorted SC-FDMA Symbols

    NASA Astrophysics Data System (ADS)

    Gazda, Juraj; Deumal, Marc; Bergada, Pau; Drotár, Peter; Kocur, Dušan; Galajda, Pavol

    2011-11-01

    Single Carrier Frequency Division Multiple Access (SC-FDMA) is the modulation scheme being employed in the uplink of Long Term Evolution (LTE) and will also be employed in the upcoming fourth generation (4G) system LTE Advanced. The main advantage of using SC-FDMA in the uplink, as opposed to traditional orthogonal frequency division multiple access (OFDMA), is its lower sensitivity to nonlinear amplification. Nevertheless, the strict requirements of 4G systems suggest that techniques to reduce this sensitivity even further should be introduced. In this paper, we evaluate the effect of nonlinearities in SC-FDMA and present an iterative receiver scheme to reduce the error probability of SC-FDMA systems undergoing nonlinear amplification. The technique consists of successive estimation and compensation of both the nonlinear distortion and the non-constant attenuation and rotation introduced by the transmitter high power amplifier (HPA). Simulation results expressed by the bit error rate (BER), the error vector magnitude (EVM), and the total degradation (TD) show that the proposed iterative receiver provides a significant performance improvement.
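
    A stripped-down version of the idea can be sketched as decision-directed cancellation of clipping distortion. The code below is my simplification with invented parameters (16-QAM, a soft-limiter HPA model, localized subcarrier mapping, AWGN only), not the authors' receiver; the printed error rates depend on the chosen clipping level and SNR, and the point is only to show the regenerate-estimate-cancel loop.

```python
# Minimal sketch: iterative estimation and cancellation of HPA distortion for a
# localized SC-FDMA block. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 64                    # data symbols per block, IFFT size (localized mapping)
snr_db, clip_ratio = 22.0, 0.8   # hypothetical operating point
n_blocks, n_iter = 400, 3
LEVELS = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10)   # unit-energy 16-QAM levels

def sc_fdma_modulate(d):
    """DFT-spread OFDM: M-point DFT, map to the first M subcarriers, N-point IDFT."""
    S = np.zeros(N, complex)
    S[:M] = np.fft.fft(d, norm="ortho")
    return np.fft.ifft(S, norm="ortho")

def hpa(s):
    """Soft-limiter HPA: clip the envelope at clip_ratio times the rms level."""
    a = clip_ratio * np.sqrt(np.mean(np.abs(s) ** 2))
    mag = np.abs(s) + 1e-12
    return np.where(mag > a, a * s / mag, s)

def detect(y):
    """Demap, despread, blindly normalize the gain, and slice to 16-QAM decisions."""
    dhat = np.fft.ifft(np.fft.fft(y, norm="ortho")[:M], norm="ortho")
    dhat = dhat / np.sqrt(np.mean(np.abs(dhat) ** 2))
    re = LEVELS[np.argmin(np.abs(dhat.real[:, None] - LEVELS), axis=1)]
    im = LEVELS[np.argmin(np.abs(dhat.imag[:, None] - LEVELS), axis=1)]
    return re + 1j * im

sym_errors = np.zeros(n_iter + 1)
for _ in range(n_blocks):
    d = rng.choice(LEVELS, M) + 1j * rng.choice(LEVELS, M)
    x = hpa(sc_fdma_modulate(d))
    sigma2 = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
    r = x + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

    dec = detect(r)                              # iteration 0: no compensation
    sym_errors[0] += np.count_nonzero(np.abs(dec - d) > 1e-9)
    for it in range(1, n_iter + 1):
        s_hat = sc_fdma_modulate(dec)            # regenerate a clean waveform from decisions
        x_hat = hpa(s_hat)                       # re-apply the HPA model
        alpha = np.vdot(s_hat, x_hat) / np.vdot(s_hat, s_hat)  # Bussgang gain (attenuation/rotation)
        dist = x_hat - alpha * s_hat             # estimated nonlinear distortion
        dec = detect(r - dist)                   # cancel it and re-detect
        sym_errors[it] += np.count_nonzero(np.abs(dec - d) > 1e-9)

ser = sym_errors / (M * n_blocks)
print("symbol error rate per iteration (0 = no cancellation):", np.round(ser, 4))
```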

  12. Factor Structure of the Personality Research Form-E: A Maximum Likelihood Analysis.

    ERIC Educational Resources Information Center

    Fowler, Patrick C.

    1985-01-01

    Presents a five-factor structural model of the Personality Research Form-E for 140 university undergraduates. All factors demonstrated an excellent level of similarity to those previously reported for other forms of the PRF, as well as to the conceptual scheme developed by Jackson (1974). (Author/JAC)

  13. Distinguishing between Latent Classes and Continuous Factors: Resolution by Maximum Likelihood?

    ERIC Educational Resources Information Center

    Lubke, Gitta; Neale, Michael C.

    2006-01-01

    Latent variable models exist with continuous, categorical, or both types of latent variables. The role of latent variables is to account for systematic patterns in the observed responses. This article has two goals: (a) to establish whether, based on observed responses, it can be decided that an underlying latent variable is continuous or…

  14. On Obtaining Estimates of the Fraction of Missing Information from Full Information Maximum Likelihood

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2012-01-01

    Fraction of missing information λ_j is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…

  15. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the SSA. A Landsat-5 TM image from 02-Sep-1994 was used to derive the classification. A technique was implemented that uses reflectances of various land cover types along with a geometric optical canopy model to produce spectral trajectories. These trajectories are used as training data to classify the image into the different land cover classes. These data are provided in a binary image file format. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
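
    For readers unfamiliar with the classifier itself, the sketch below shows generic per-class Gaussian maximum-likelihood classification of pixel reflectances. The class names, band count, and training values are invented; this is not the TE-18 processing chain, in which the training data come from model-derived spectral trajectories.

```python
# Minimal sketch of Gaussian maximum-likelihood land-cover classification:
# each class is modeled by a multivariate Gaussian fit to training
# reflectances, and each pixel is assigned to the class with the highest
# log-likelihood.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n_bands, classes = 6, ["conifer", "deciduous", "fen"]

# Hypothetical training reflectances per class.
train = {c: rng.normal(loc=mu, scale=0.03, size=(200, n_bands))
         for c, mu in zip(classes, [0.25, 0.40, 0.55])}

# Fit one Gaussian per class (mean vector and band covariance).
models = {c: multivariate_normal(mean=x.mean(axis=0), cov=np.cov(x, rowvar=False))
          for c, x in train.items()}

# Classify a batch of pixels: pick the class maximizing the log-likelihood.
pixels = rng.normal(loc=0.40, scale=0.05, size=(1000, n_bands))
logp = np.column_stack([models[c].logpdf(pixels) for c in classes])
labels = np.array(classes)[logp.argmax(axis=1)]
print({c: int((labels == c).sum()) for c in classes})
```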

  16. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x_1, x_2, ..., x_n be a random sample from a population with density f_0, let σ > 0, and consider estimators f of f_0 defined by (1).
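
    As a loosely related, self-contained illustration of likelihood-based calibration of a kernel estimator (not the sieve construction characterized in the paper), the sketch below selects a Gaussian-kernel bandwidth by maximizing the leave-one-out log-likelihood on a synthetic sample; the sample, kernel, and bandwidth grid are assumptions.

```python
# Minimal sketch: likelihood cross-validation for the bandwidth of a Gaussian
# kernel density estimator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=200)                      # sample from the "unknown" density

def loo_log_likelihood(h):
    """Sum of log leave-one-out kernel density estimates at bandwidth h."""
    diffs = (x[:, None] - x[None, :]) / h
    k = norm.pdf(diffs) / h
    np.fill_diagonal(k, 0.0)                  # hold out x_i when evaluating at x_i
    loo = k.sum(axis=1) / (len(x) - 1)
    return np.sum(np.log(loo))

grid = np.linspace(0.05, 1.0, 40)
scores = [loo_log_likelihood(h) for h in grid]
h_ml = grid[int(np.argmax(scores))]
print(f"likelihood-CV bandwidth: {h_ml:.3f}")
```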

  17. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations

    ERIC Educational Resources Information Center

    Can, Seda; van de Schoot, Rens; Hox, Joop

    2015-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation…

  18. Maximum likelihood methods reveal conservation of function among closely related kinesin families.

    PubMed

    Lawrence, Carolyn J; Malmberg, Russell L; Muszynski, Michael G; Dawe, R Kelly

    2002-01-01

    We have reconstructed the evolution of the anciently derived kinesin superfamily using various alignment and tree-building methods. In addition to classifying previously described kinesins from protists, fungi, and animals, we analyzed a variety of kinesin sequences from the plant kingdom including 12 from Zea mays and 29 from Arabidopsis thaliana. Also included in our data set were four sequences from the anciently diverged amitochondriate protist Giardia lamblia. The overall topology of the best tree we found is more likely than previously reported topologies and allows us to make the following new observations: (1) kinesins involved in chromosome movement including MCAK, chromokinesin, and CENP-E may be descended from a single ancestor; (2) kinesins that form complex oligomers are limited to a monophyletic group of families; (3) kinesins that crosslink antiparallel microtubules at the spindle midzone including BIMC, MKLP, and CENP-E are closely related; (4) Drosophila NOD and human KID group with other characterized chromokinesins; and (5) Saccharomyces SMY1 groups with kinesin-I sequences, forming a family of kinesins capable of class V myosin interactions. In addition, we found that one monophyletic clade composed exclusively of sequences with a C-terminal motor domain contains all known minus end-directed kinesins.

  19. Strategies for Handling Missing Data with Maximum Likelihood Estimation in Career and Technical Education Research

    ERIC Educational Resources Information Center

    Lee, In Heok

    2012-01-01

    Researchers in career and technical education often ignore more effective ways of reporting and treating missing data and instead implement traditional, but ineffective, missing data methods (Gemici, Rojewski, & Lee, 2012). The recent methodological, and even the non-methodological, literature has increasingly emphasized the importance of…

  20. Approximate and Pseudo-Likelihood Analysis for Logistic Regression Using External Validation Data to Model Log Exposure

    PubMed Central

    Kupper, Lawrence L.

    2012-01-01

    A common goal in environmental epidemiologic studies is to undertake logistic regression modeling to associate a continuous measure of exposure with binary disease status, adjusting for covariates. A frequent complication is that exposure may only be measurable indirectly, through a collection of subject-specific variables assumed associated with it. Motivated by a specific study to investigate the association between lung function and exposure to metalworking fluids, we focus on a multiplicative-lognormal structural measurement error scenario and approaches to address it when external validation data are available. Conceptually, we emphasize the case in which true untransformed exposure is of interest in modeling disease status, but measurement error is additive on the log scale and thus multiplicative on the raw scale. Methodologically, we favor a pseudo-likelihood (PL) approach that exhibits fewer computational problems than direct full maximum likelihood (ML) yet maintains consistency under the assumed models without necessitating assumptions of small exposure effects and/or small measurement error. Such assumptions are required by computationally convenient alternative methods like regression calibration (RC) and ML based on probit approximations. We summarize simulations demonstrating considerable potential for bias in the latter two approaches, while supporting the use of PL across a variety of scenarios. We also provide accessible strategies for obtaining adjusted standard errors to accompany RC and PL estimates. PMID:24027381
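
    The plug-in pseudo-likelihood idea can be sketched on synthetic data: the exposure and measurement-error models are estimated from external validation pairs and then plugged into the main-study likelihood, with the unobserved exposure integrated out (here by Gauss-Hermite quadrature, my choice of integration device). The sample sizes, parameter values, and variable names below are invented, not those of the study.

```python
# Minimal sketch of a pseudo-likelihood fit for logistic regression when the
# true exposure X is observed only through a multiplicative-lognormal
# surrogate W, with error and exposure parameters taken from validation data.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
beta_true = np.array([-1.0, 0.5])        # intercept and effect of untransformed X

# Main study: disease status Y and surrogate exposure W only.
n = 2000
x = np.exp(rng.normal(0.0, 0.5, n))
w = x * np.exp(rng.normal(0.0, 0.4, n))  # multiplicative lognormal error
y = rng.random(n) < expit(beta_true[0] + beta_true[1] * x)

# External validation study: (X, W) pairs, no disease status.
m = 500
log_xv = rng.normal(0.0, 0.5, m)
log_wv = log_xv + rng.normal(0.0, 0.4, m)
mu_x, var_x = log_xv.mean(), log_xv.var()
var_u = (log_wv - log_xv).var()          # measurement-error variance on the log scale

# Conditional model for logX given logW implied by the normal-normal structure.
shrink = var_x / (var_x + var_u)
mu_post = mu_x + shrink * (np.log(w) - mu_x)
tau = np.sqrt(var_x * var_u / (var_x + var_u))

# Gauss-Hermite nodes for integrating over the unobserved exposure.
nodes, weights = hermgauss(20)
x_nodes = np.exp(mu_post[:, None] + np.sqrt(2) * tau * nodes[None, :])  # shape (n, 20)

def neg_pseudo_loglik(beta):
    p = expit(beta[0] + beta[1] * x_nodes)    # P(Y=1 | X = node value)
    like = np.where(y[:, None], p, 1.0 - p)   # Bernoulli likelihood at each node
    marg = like @ weights / np.sqrt(np.pi)    # integrate out X | W
    return -np.sum(np.log(marg))

fit = minimize(neg_pseudo_loglik, x0=np.zeros(2), method="BFGS")
print("true beta:", beta_true, " PL estimate:", np.round(fit.x, 3))
```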