Science.gov

Sample records for maximum likelihood approach

  1. A Unified Maximum Likelihood Approach to Document Retrieval.

    ERIC Educational Resources Information Center

    Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex

    2001-01-01

    Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete objective function that estimates both document and query parameters in accordance with all available feedback data. (AEF)

  2. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of the training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from that conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2…

  3. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Pesticide Factsheets

    Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML), to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient…
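
    As an aside, a minimal sketch of the Bayesian Monte Carlo step described above, assuming a toy first-order recovery model and Gaussian measurement noise; the names (do_model, k_prior) and all numbers are illustrative, not from the EPA study:

        import numpy as np

        rng = np.random.default_rng(0)

        def do_model(k, t):
            # Toy first-order oxygen recovery curve (illustrative stand-in);
            # 9.0 mg/L is an assumed saturation DO level.
            return 9.0 * (1.0 - np.exp(-k * t))

        t_obs = np.linspace(1, 30, 30)                    # days
        do_obs = do_model(0.15, t_obs) + rng.normal(0, 0.3, t_obs.size)

        k_prior = rng.uniform(0.01, 0.5, 5000)            # Monte Carlo draws from the prior
        sigma = 0.3                                       # assumed measurement noise, mg/L
        resid = do_obs - do_model(k_prior[:, None], t_obs)
        log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
        w = np.exp(log_like - log_like.max())
        w /= w.sum()                                      # normalized likelihood weights

        print("ML draw:", k_prior[np.argmax(log_like)])   # maximum likelihood estimate
        print("posterior mean:", np.sum(w * k_prior))     # Bayesian Monte Carlo estimate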

  4. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
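
    For reference, a hedged sketch of the Landy-Szalay estimator built from the same normalized pair-count observables the ML estimator reuses; the counts in the usage line are invented:

        import numpy as np

        def landy_szalay(DD, DR, RR, nd, nr):
            # Normalize raw pair counts by the number of possible pairs,
            # then form xi_LS = (dd - 2*dr + rr) / rr for one separation bin.
            dd = DD / (nd * (nd - 1) / 2.0)
            dr = DR / (nd * nr)
            rr = RR / (nr * (nr - 1) / 2.0)
            return (dd - 2.0 * dr + rr) / rr

        print(landy_szalay(DD=1520.0, DR=28000.0, RR=140000.0, nd=1000, nr=10000))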

  5. Evolution of photosynthetic prokaryotes: a maximum-likelihood mapping approach.

    PubMed Central

    Raymond, Jason; Zhaxybayeva, Olga; Gogarten, J Peter; Blankenship, Robert E

    2003-01-01

    Reconstructing the early evolution of photosynthesis has been guided in part by the geological record, but the complexity and great antiquity of these early events require molecular genetic techniques as the primary tools of inference. Recent genome sequencing efforts have made whole genome data available from representatives of each of the five phyla of bacteria with photosynthetic members, allowing extensive phylogenetic comparisons of these organisms. Here, we have undertaken whole genome comparisons using maximum likelihood to compare 527 unique sets of orthologous genes from all five photosynthetic phyla. Substantiating recent whole genome analyses of other prokaryotes, our results indicate that horizontal gene transfer (HGT) has played a significant part in the evolution of these organisms, resulting in genomes with mosaic evolutionary histories. A small plurality phylogenetic signal was observed, which may derive from a core of remnant genes not subject to HGT, or may result from a propensity for gene exchange between two or more of the photosynthetic organisms compared. PMID:12594930

  6. An updated maximum likelihood approach to open cluster distance determination

    NASA Astrophysics Data System (ADS)

    Palmer, M.; Arenou, F.; Luri, X.; Masana, E.

    2014-04-01

    Aims: An improved method for estimating distances to open clusters is presented and applied to Hipparcos data for the Pleiades and the Hyades. The method is applied in the context of the historic Pleiades distance problem, with a discussion of previous criticisms of Hipparcos parallaxes. This is followed by an outlook for Gaia, where the improved method could be especially useful. Methods: Based on maximum likelihood estimation, the method combines parallax, position, apparent magnitude, colour, proper motion, and radial velocity information to estimate the parameters describing an open cluster precisely and without bias. Results: We find the distance to the Pleiades to be 120.3 ± 1.5 pc, in accordance with previously published work using the same dataset. We find that error correlations cannot be responsible for the still-present discrepancy between Hipparcos and photometric methods. Additionally, the three-dimensional space velocity and physical structure of the Pleiades are parametrised, and we find strong evidence of mass segregation. The distance to the Hyades is found to be 46.35 ± 0.35 pc, also in accordance with previous results. Through the use of simulations, we confirm that the method is unbiased, and so it will be useful for accurate open cluster parameter estimation with Gaia at distances up to several thousand parsec. Appendices are available in electronic form at http://www.aanda.org

  7. Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer

    PubMed Central

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander

    2014-01-01

    The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method. PMID:24173548
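
    A minimal sketch of the mechanics the primer walks through, assuming a toy exponential survival-time model: write down the log-likelihood and let a numerical optimizer find the parameter value that gives the observed data the highest probability.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        times = rng.exponential(scale=2.0, size=50)   # simulated data, true mean = 2

        def neg_log_likelihood(rate):
            # Exponential density f(t) = rate * exp(-rate * t), summed over the data
            return -(np.log(rate) * times.size - rate * times.sum())

        res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10), method="bounded")
        print(f"MLE of rate: {res.x:.3f} (closed form: {1 / times.mean():.3f})")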

  8. Maximum likelihood signature estimation

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1975-01-01

    Maximum-likelihood estimates of unknown parameters in a mixture of normal distributions, based on an unlabeled sample of observations, are discussed. Several successive approximation procedures for obtaining such maximum-likelihood estimates are described. These procedures, which are theoretically justified by the local contractibility of certain maps, are designed to take advantage of good initial estimates of the unknown parameters. They can be applied to the signature extension problem, in which good initial estimates of the unknown parameters are obtained from segments that are geographically near the segments from which the unlabeled samples are taken. Additional problems to which these methods are applicable include estimation of proportions and adaptive classification (estimation of mean signatures and covariances).
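
    A hedged sketch of one such successive-approximation scheme, the EM iteration for a two-component mixture of normals started from good initial estimates; this illustrates the idea, not the record's exact procedure:

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])

        # Initial estimates (in signature extension these would come from
        # nearby, already-labeled segments).
        mu = np.array([-1.0, 5.0])
        sigma = np.array([1.0, 1.0])
        w = np.array([0.5, 0.5])

        for _ in range(100):
            # E-step: posterior probability that each observation came from each class
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
            dens /= sigma * np.sqrt(2 * np.pi)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate proportions, means, and standard deviations
            nk = resp.sum(axis=0)
            w = nk / x.size
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("means:", mu, "sigmas:", sigma, "proportions:", w)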

  9. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    PubMed Central

    2010-01-01

    Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the dataset. In addition…

  10. Bringing the analysis of animal orientation data full circle: model-based approaches with maximum likelihood.

    PubMed

    Fitak, Robert R; Johnsen, Sönke

    2017-08-31

    In studies of animal orientation, data are often represented as directions that can be analyzed using circular statistical methods. Although several circular statistical tests exist to detect the presence of a mean direction, likelihood-based approaches may offer advantages in hypothesis testing, especially when data are multimodal. Unfortunately, likelihood-based inference in animal orientation remains rare. Here, we discuss some of the assumptions and limitations of common circular tests and report a new R package called CircMLE to implement the maximum likelihood analysis of circular data. We illustrate the use of this package on both simulated datasets and an empirical example dataset in Chinook salmon (Oncorhynchus tshawytscha). Our software provides a convenient interface that facilitates the use of model-based approaches in animal orientation studies.
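
    CircMLE itself is an R package; as a hedged Python analogue of the model-based idea, one can fit a von Mises distribution to directions by maximum likelihood and compare it against a uniform (no mean direction) model with AIC. The simulated data and kappa value are assumptions:

        import numpy as np
        from scipy.stats import vonmises

        rng = np.random.default_rng(3)
        theta = vonmises.rvs(2.0, loc=np.pi / 4, size=100, random_state=rng)

        kappa_hat, loc_hat, _ = vonmises.fit(theta, fscale=1)    # unit scale: angles
        ll_vm = vonmises.logpdf(theta, kappa_hat, loc=loc_hat).sum()
        ll_uniform = -theta.size * np.log(2 * np.pi)             # uniform on the circle

        aic_vm = 2 * 2 - 2 * ll_vm        # two free parameters (mean direction, kappa)
        aic_uniform = -2 * ll_uniform     # zero free parameters
        print(f"AIC von Mises: {aic_vm:.1f}, AIC uniform: {aic_uniform:.1f}")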

  11. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
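
    A hedged sketch of the weighted least-squares core described above, on toy data: fit DN = c0 + c1*L + c2*L^2 with weights from the common effective-variance approximation for noise in both variables. This is an illustration, not the actual VIIRS calibration procedure or data:

        import numpy as np

        rng = np.random.default_rng(4)
        L = np.linspace(0.0, 1.0, 20)                 # attenuated radiance (arbitrary units)
        dn = 10 + 800 * L - 30 * L**2 + rng.normal(0, 2.0, L.size)

        sigma_dn, sigma_L = 2.0, 0.005                # assumed noise levels
        c = np.polyfit(L, dn, 2)                      # initial unweighted fit
        for _ in range(5):                            # iterate: weights depend on the slope
            slope = np.polyval(np.polyder(c), L)
            var_eff = sigma_dn**2 + (slope * sigma_L) ** 2
            c = np.polyfit(L, dn, 2, w=1.0 / np.sqrt(var_eff))
        print("c2, c1, c0 =", c)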

  12. A maximum likelihood approach to determine sensor radiometric response coefficients for NPP VIIRS reflective solar bands

    NASA Astrophysics Data System (ADS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-10-01

    Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager Radiometer Suite (VIIRS) assume that the sensors' radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial, in relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1 (FU1) (Butler, J., Xiong, X., Oudrari, H., Pan, C., and Gleason, J., "NASA Calibration and Characterization in the NPOESS Preparatory Project (NPP)", IGARSS, July 12-17, 2009, Cape Town, South Africa.), the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but also is affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the quadratic model. We show that using the inadequate quadratic model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.

  13. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM), which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping, the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read.

  14. TrExML: a maximum-likelihood approach for extensive tree-space exploration.

    PubMed

    Wolf, M J; Easteal, S; Kahn, M; McKay, B D; Jermiin, L S

    2000-04-01

    Maximum-likelihood analysis of nucleotide and amino acid sequences is a powerful approach for inferring phylogenetic relationships and for comparing evolutionary hypotheses. Because it is a computationally demanding and time-consuming process, most algorithms explore only a minute portion of tree-space, with the emphasis on finding the most likely tree while ignoring the less likely, but not significantly worse, trees. However, when such trees exist, it is equally important to identify them to give due consideration to the phylogenetic uncertainty. Consequently, it is necessary to change the focus of these algorithms such that near optimal trees are also identified. This paper presents the Advanced Stepwise Addition Algorithm for exploring tree-space and two algorithms for generating all binary trees on a set of sequences. The Advanced Stepwise Addition Algorithm has been implemented in TrExML, a phylogenetic program for maximum-likelihood analysis of nucleotide sequences. TrExML is shown to be more effective at finding near optimal trees than a similar program, fastDNAml, implying that TrExML offers a better approach to account for phylogenetic uncertainty than has previously been possible. A program, TreeGen, is also described; it generates binary trees on a set of sequences allowing for extensive exploration of tree-space using other programs. TreeGen, TrExML, and the sequence data used to test the programs are available from the following two WWW sites: http://whitetail.bemidji.msus.edu/trexml/ and http://jcsmr.anu.edu.au/dmm/humgen.html
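
    A short aside on why exhaustive generation is feasible only for small problems: the number of distinct unrooted binary trees on n sequences is (2n-5)!!, which the sketch below computes (a standard combinatorial fact, not TreeGen's code):

        def num_unrooted_binary_trees(n: int) -> int:
            # (2n-5)!! = 3 * 5 * ... * (2n-5) distinct unrooted binary
            # trees on n labeled leaves, for n >= 3.
            count = 1
            for k in range(3, 2 * n - 4, 2):
                count *= k
            return count

        for n in (4, 5, 10, 20):
            print(n, num_unrooted_binary_trees(n))   # 3, 15, 2027025, ...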

  15. Maximum-likelihood approaches reveal signatures of positive selection in IL genes in mammals.

    PubMed

    Neves, Fabiana; Abrantes, Joana; Steinke, John W; Esteves, Pedro J

    2014-02-01

    ILs are part of the immune system and are involved in multiple biological activities. ILs have been shown to evolve under positive selection; however, little information exists regarding which codons are specifically selected. By using different codon-based maximum-likelihood (ML) approaches, signatures of positive selection in mammalian ILs were searched for. Sequences of 46 ILs were retrieved from publicly available databases of mammalian genomes to detect signatures of positive selection in individual codons. Evolutionary analyses were conducted under two ML frameworks, the HyPhy package implemented in the Datamonkey web server and CODEML implemented in PAML. Signatures of positive selection were found in 28 ILs: IL-1A and B, IL-2, IL-4 to IL-10, IL-12A and B, IL-14 to IL-17A and C, IL-18, IL-20 to IL-22, IL-25, IL-26, IL-27B, IL-31, IL-34, and IL-36A and G. The number of codons under positive selection varied between 1 and 15. No evidence of positive selection was detected in IL-13, IL-17B and F, IL-19, IL-23, IL-24, IL-27A, or IL-29. Most mammalian ILs have sites evolving under positive selection, which may be explained by the multitude of biological processes in which ILs are enrolled. The results obtained raise hypotheses concerning the ILs' functions, which should be pursued by using mutagenesis and crystallographic approaches.

  16. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations used to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most…

  17. Soil mapping in northern Thailand based on a radiometrically calibrated maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Schuler, U.; Herrmann, L.; Rangnugpit, W.; Stahr, K.

    2009-04-01

    … area with low background radiation showed different gamma-ray spectra for the respective reference soil groups, so that these points can be used as secondary training data. In conclusion, the calibration of the maximum likelihood approach with airborne radiometric data offers a promising possibility for efficient soil mapping of larger regions in northern Thailand.

  18. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations

    PubMed Central

    Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.

    2013-01-01

    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable to be used together with other methods to tackle the free energy estimation problem. PMID:23457427
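
    A hedged, heavily simplified sketch of the variational idea: parameterize a 1D free energy profile F(x; c) (here a quartic polynomial in units of kT, an assumption) and choose c to maximize the likelihood of sampled configurations under p(x) = exp(-F(x))/Z(c). VFEP itself handles multiple biased windows; this sketch uses a single unbiased ensemble:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        x = rng.normal(0.0, 0.7, 400)            # samples from one basin (toy data)

        grid = np.linspace(-4, 4, 801)           # quadrature grid for Z(c)
        dx = grid[1] - grid[0]

        def neg_log_likelihood(c):
            # -log L = sum_i F(x_i; c) + N * log Z(c)
            Z = np.exp(-np.polyval(c, grid)).sum() * dx
            return np.polyval(c, x).sum() + x.size * np.log(Z)

        res = minimize(neg_log_likelihood, np.zeros(5), method="Nelder-Mead",
                       options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
        print("fitted coefficients:", res.x)     # true profile: F(x) = x**2 / (2 * 0.7**2)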

  19. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations.

    PubMed

    Lee, Tai-Sung; Radak, Brian K; Pabis, Anna; York, Darrin M

    2013-01-08

    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable to be used together with other methods to tackle the free energy estimation problem.

  1. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  2. Analysis of Rayleigh waves with circular wavefront: a maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Maranò, Stefano; Hobiger, Manuel; Bergamo, Paolo; Fäh, Donat

    2017-09-01

    Analysis of Rayleigh waves is an important task in seismology and geotechnical investigations. In fact, properties of Rayleigh waves such as velocity and polarization are important observables that carry information about the structure of the subsoil. Applications analysing Rayleigh waves include active and passive seismic surveys. In active surveys, there is a controlled source of seismic energy and the sensors are typically placed near the source. In passive surveys, there is no controlled source; rather, seismic waves from ambient vibrations are analysed, and the sources are assumed to be far outside the array, simplifying the analysis by the assumption of plane waves. Whenever the source is in the proximity of the array of sensors, or even within the array, it is necessary to model the wave propagation accounting for the circular wavefront. In addition, it is also necessary to model the amplitude decay due to geometrical spreading. This is the case in active seismic surveys, in which sensors are located near the seismic source. In this work, we propose a maximum likelihood (ML) approach for the analysis of Rayleigh waves generated at a near source. Our statistical model accounts for the curvature of the wavefront and amplitude decay due to geometrical spreading. Using our method, we show applications to real data, retrieving Rayleigh wave dispersion and ellipticity. We employ arrays with arbitrary geometry. Furthermore, we show how it is possible to combine active and passive surveys. This enables us to enlarge the analysable frequency range and therefore the depths investigated. We retrieve properties of Rayleigh waves from both active and passive surveys and show the excellent agreement of the results from the two surveys. In our approach we use the same array of sensors for both the passive and the active survey. This greatly simplifies the logistics necessary to perform a survey.

  3. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could…

  4. C-arm perfusion imaging with a fast penalized maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg

    2014-03-01

    Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to get the diagnosis as fast as possible. Therefore our approach aims at perfusion imaging (PI) with a cone beam C-arm system providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation enhancements due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slow rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We choose a penalized maximum-likelihood reconstruction method to get suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image in fewer iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between the two.
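
    For orientation, a hedged sketch of the standard multiplicative ML-EM update that the authors replace with a Newton-based step: for Poisson data y ~ Poisson(A @ x), iterate x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1). The 1D system matrix below is a toy stand-in for the C-arm cone-beam geometry:

        import numpy as np

        rng = np.random.default_rng(6)
        n_meas, n_pix = 64, 32
        A = rng.uniform(0, 1, (n_meas, n_pix))    # toy system (projection) matrix
        x_true = np.zeros(n_pix)
        x_true[10:20] = 5.0
        y = rng.poisson(A @ x_true)               # noisy Poisson measurements

        x = np.ones(n_pix)                        # positive initialization
        sens = A.sum(axis=0)                      # sensitivity image, A.T @ 1
        for _ in range(200):
            x *= (A.T @ (y / (A @ x))) / sens     # multiplicative EM update

        print(np.round(x[8:22], 2))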

  5. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA).

    PubMed

    Röck, Alexander W; Dür, Arne; van Oven, Mannis; Parson, Walther

    2013-12-01

    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated.

  6. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)

    PubMed Central

    Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther

    2013-01-01

    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335

  7. Maximum-likelihood approach to topological charge fluctuations in lattice gauge theory

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Cheng, M.; Fleming, G. T.; Lin, M. F.; Neil, E. T.; Osborn, J. C.; Rebbi, C.; Rinaldi, E.; Schaich, D.; Schroeder, C.; Voronov, G.; Vranas, P.; Weinberg, E.; Witzel, O.

    2014-07-01

    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
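
    A hedged sketch of the core likelihood, with toy numbers: global topological charge takes integer values, so the observed charges are modeled as draws from a discretized Gaussian of variance chi * V, and chi is estimated by maximizing the likelihood:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(7)
        V, chi_true = 1000.0, 0.004                  # assumed volume and susceptibility
        Q = np.round(rng.normal(0, np.sqrt(chi_true * V), 250))   # short MC time series

        q_support = np.arange(-60, 61)               # integer charges considered

        def neg_log_like(chi):
            p = np.exp(-q_support**2 / (2 * chi * V))
            p /= p.sum()                             # discretized, normalized Gaussian
            return -np.log(p[(Q - q_support[0]).astype(int)]).sum()

        res = minimize_scalar(neg_log_like, bounds=(1e-5, 0.1), method="bounded")
        print(f"chi_hat = {res.x:.5f} (true {chi_true})")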

  8. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…

  9. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    PubMed

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.

  10. Vestige: Maximum likelihood phylogenetic footprinting

    PubMed Central

    Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A

    2005-01-01

    Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and…

  11. Inferring kinetic pathways, rates, and force dependence from nonprocessive optical tweezers experiments: a maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Kalafut, Bennett; Visscher, Koen

    2008-10-01

    Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
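
    A hedged sketch of the kind of likelihood involved (illustrative only; the actual T7 kinetic scheme has more states): exponential dwell times whose rate carries a Bell-type force dependence k(F) = k0 * exp(F * d / kT), with k0 and d fit jointly from dwells recorded at several forces:

        import numpy as np
        from scipy.optimize import minimize

        kT = 4.1                                      # pN*nm at room temperature
        rng = np.random.default_rng(8)
        forces = np.repeat([5.0, 10.0, 15.0], 200)    # pN, three clamp settings
        k0_true, d_true = 2.0, 0.5                    # 1/s, nm (assumed)
        dwells = rng.exponential(1.0 / (k0_true * np.exp(forces * d_true / kT)))

        def neg_log_like(p):
            # Exponential log-likelihood: sum over dwells of log k(F_i) - k(F_i) * t_i
            log_k = p[0] + forces * p[1] / kT         # p = (log k0, d)
            return -(log_k - np.exp(log_k) * dwells).sum()

        res = minimize(neg_log_like, x0=[0.0, 0.1], method="Nelder-Mead")
        print(f"k0 = {np.exp(res.x[0]):.2f} 1/s, d = {res.x[1]:.2f} nm")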

  12. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    PubMed Central

    Cai, Shu; Zhou, Quan; Zhu, Hongbo

    2016-01-01

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has higher spatial resolution than existing methods based on the ML criterion. PMID:27999397
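
    For contrast with the SOS/SDP formulation, a hedged sketch of the plain ML grid search it sidesteps: for a single source on a ULA in white noise, the ML direction maximizes the power of the sample covariance projected onto the steering vector. Array size, spacing, and noise level are assumptions:

        import numpy as np

        rng = np.random.default_rng(9)
        m, n_snap, d = 8, 20, 0.5            # sensors, snapshots, spacing in wavelengths
        theta_true = 20.0                    # degrees

        def steering(theta_deg):
            phase = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
            return np.exp(1j * phase * np.arange(m))

        s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
        X = np.outer(steering(theta_true), s)
        X += 0.3 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

        R = X @ X.conj().T / n_snap          # sample covariance
        grid = np.arange(-90, 90, 0.1)
        power = [np.real(steering(t).conj() @ R @ steering(t)) for t in grid]
        print("estimated DOA:", grid[int(np.argmax(power))], "degrees")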

  13. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation.

    PubMed

    Cai, Shu; Zhou, Quan; Zhu, Hongbo

    2016-12-20

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has higher spatial resolution than existing methods based on the ML criterion.

  14. Detection of faint companions in multi-spectral data using a maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Hanley, Kenneth; Devaney, Nicholas; Thiébaut, Éric

    2016-07-01

    Direct, ground-based exoplanet detection is an extremely challenging task requiring extreme adaptive optics (AO) systems and very high contrast. Dedicated planet hunters, such as SPHERE and GPI, have been designed with these requirements in mind. Despite this, direct detection is still limited due to the presence of residual speckles. Smith et al. described a maximum likelihood estimation technique for the detection of exoplanets in speckle data in which the planet appears to rotate about a host star when observing with an alt-az telescope. We propose the adaptation of this technique to operate on multi-spectral data, such as produced by the integral field spectrographs present on both SPHERE and GPI. As the speckle pattern approximately scales smoothly with wavelength, it is possible to resample data to a single reference wavelength in which speckles will remain fixed in the wavelength dimension while any companions that are present will exhibit radial motion in a predictable manner. We simulate data comparable to SPHERE and with this we compare the performance of our algorithm with another multi-spectral detection technique: spectral deconvolution. We compare the techniques using a ROC (Receiver Operating Characteristic) analysis.

  15. The Sherpa Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
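
    A hedged sketch of the two-hypothesis comparison described above: Poisson log-likelihoods of the counts in a candidate region under background-only versus background-plus-source, summarized as a likelihood-ratio statistic. Counts, background, and PSF template are invented toy values, and the source amplitude is profiled on a coarse grid rather than fit with Sherpa:

        import numpy as np
        from scipy.stats import poisson

        counts = np.array([3, 2, 9, 4, 3])            # photon counts in region pixels
        bkg = np.full(counts.size, 2.5)               # background model expectation
        src = np.array([0.1, 0.2, 1.0, 0.2, 0.1])     # PSF-shaped source template

        def loglike(mu):
            # Poisson log-likelihood of the observed counts given expectation mu
            return poisson.logpmf(counts, mu).sum()

        amps = np.linspace(0, 20, 401)                # profile the source amplitude
        ll_src = max(loglike(bkg + a * src) for a in amps)
        TS = 2 * (ll_src - loglike(bkg))              # likelihood-ratio test statistic
        print(f"TS = {TS:.2f}")                       # larger TS -> more likely a real source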

  16. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDFs and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.

  17. Maximum-likelihood and Markov chain Monte Carlo approaches to estimate inbreeding and effective size from allele frequency changes.

    PubMed Central

    Laval, Guillaume; SanCristobal, Magali; Chevalet, Claude

    2003-01-01

    Maximum-likelihood and Bayesian (MCMC algorithm) estimates of the increase of the Wright-Malécot inbreeding coefficient, F(t), between two temporally spaced samples were developed from the Dirichlet approximation of the allelic frequency distribution (model MD) and from the admixture of the Dirichlet approximation and the probabilities of fixation and loss of alleles (model MDL). Their accuracy was tested using computer simulations in which F(t) = 10% or less. The maximum-likelihood method based on the model MDL was found to give the best estimate of F(t), provided that initial frequencies are known exactly. When founder frequencies are estimated from a limited set of founder animals, only the estimates based on the model MD can be used for the moment. In this case no method was found to be the best in all situations investigated. The likelihood and Bayesian approaches give better results than the classical F-statistics when markers exhibiting low polymorphism (such as SNP markers) are used. Concerning estimation of the effective population size, all the new estimates presented here were found to be better than the classically used F-statistics. PMID:12871924

  18. Collaborative Double Robust Targeted Maximum Likelihood Estimation

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan and Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from the current state of the art in nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum…

  19. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  1. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented in real time on currently available high-performance processors. In that approach the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce significantly the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
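
    As an aside for readers unfamiliar with one-step shift estimation: phase correlation is a standard single-pass technique for recovering translational shifts between frames, sketched below on synthetic data. It is only an illustration of the idea, not the authors' registration algorithm.

      import numpy as np

      def estimate_shift(ref, frame):
          """Estimate the integer (row, col) shift of `frame` relative to `ref`."""
          F1 = np.fft.fft2(ref)
          F2 = np.fft.fft2(frame)
          cross = F2 * np.conj(F1)
          cross /= np.abs(cross) + 1e-12          # keep phase information only
          corr = np.fft.ifft2(cross).real         # impulse at the shift
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # wrap peaks past the midpoint around to negative shifts
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

      rng = np.random.default_rng(0)
      ref = rng.random((64, 64))
      frame = np.roll(ref, (3, -2), axis=(0, 1))  # synthetic shifted frame
      print(estimate_shift(ref, frame))           # -> (3, -2)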

  2. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
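
    For context, the conventional MR-fingerprinting reconstruction that the ML framework generalizes is a simple dictionary match: each voxel's time course is compared against simulated signal evolutions and assigned the parameters of the best-correlated atom. A hedged sketch follows; the array shapes, names, and random data are illustrative assumptions, not the paper's code.

      import numpy as np

      def dictionary_match(signals, dictionary, params):
          """signals: (voxels, timepoints); dictionary: (atoms, timepoints);
          params: (atoms, 2) holding the (T1, T2) that generated each atom."""
          d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
          s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
          best = np.argmax(np.abs(s @ d.conj().T), axis=1)   # best-correlated atom
          return params[best]                                # (T1, T2) per voxel

      rng = np.random.default_rng(9)
      dictionary = rng.standard_normal((500, 100)) + 1j * rng.standard_normal((500, 100))
      params = rng.uniform([200.0, 20.0], [2000.0, 200.0], size=(500, 2))  # ms
      signals = dictionary[[10, 42]] * 2.0        # scaled copies of two atoms
      print(dictionary_match(signals, dictionary, params) == params[[10, 42]])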

  3. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  4. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo-verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
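
    To illustrate the kind of event-by-event analysis described above: each event carries a probability density value for each of the five processes, and the process fractions can be fitted by maximizing the total log-likelihood. The sketch below uses random stand-in PDF values; in the real analysis these come from Monte Carlo-verified distributions of the observables.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n_events, n_proc = 5000, 5
      p = rng.random((n_events, n_proc))          # stand-in per-event PDF values

      def nll(z):
          f = np.exp(z) / np.exp(z).sum()         # softmax keeps fractions on simplex
          return -np.sum(np.log(p @ f + 1e-300))  # -log L = -sum_i log sum_k f_k p_ik

      res = minimize(nll, np.zeros(n_proc), method="Nelder-Mead")
      fractions = np.exp(res.x) / np.exp(res.x).sum()
      print(fractions)                            # fitted process fractions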

  5. Improving soil moisture profile prediction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-02-01

    The vertical profile of root zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time and a radar electromagnetic model to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows us to use the information of the whole soil moisture profile for the assimilation. We validated our approach in a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Increasing the update interval from 5 to 50 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but improves it significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  6. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-07-01

    The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows us to use the information of the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering 3 soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but improves it significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  7. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach

    PubMed Central

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations. PMID:26474313
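
    For readers who want to see the standard machinery the abstract invokes, here is a minimal maximum-likelihood fit of a plain logistic discrimination function on synthetic data; the paper's zero-inflated variant adds a mislabeling component on top of this likelihood, which is omitted here.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(8)
      n = 1000
      x = rng.normal(size=n)
      y = rng.binomial(1, expit(-0.5 + 1.2 * x))        # synthetic labels

      def nll(beta):
          eta = beta[0] + beta[1] * x
          # Bernoulli negative log-likelihood with logit link, written stably
          return np.sum(np.logaddexp(0.0, eta) - y * eta)

      print(minimize(nll, np.zeros(2)).x)               # approx (-0.5, 1.2)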

  8. Bayesian Monte Carlo and maximum likelihood approach for uncertainty estimation and risk management: Application to lake oxygen recovery model.

    PubMed

    Chaudhary, Abhishek; Hantush, Mohamed M

    2017-01-01

    Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and it is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient for oxygen and wind speed, and the associated 95% confidence band, which are shown to be consistent with reported measured values at five different lakes. Finally, we illustrate the robustness of BMCML for solving risk-based water quality management problems, showing that neglecting cross-correlations between parameters could lead to an improper required BOD load reduction to achieve the compliance criterion of 5 mg/L.
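
    The flavor of the Bayesian Monte Carlo step can be conveyed in a few lines: draw parameters from a prior, weight each draw by the Gaussian likelihood of the observed DO series, and read off both posterior summaries and the ML parameter set. The toy recovery curve and all numbers below are placeholders, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(30.0)                          # days
      obs = 4 + 4 * (1 - np.exp(-0.15 * t)) + rng.normal(0, 0.3, t.size)

      def model(k, t):                             # toy oxygen-recovery curve
          return 4 + 4 * (1 - np.exp(-k * t))

      k_prior = rng.uniform(0.01, 0.5, 20000)      # Monte Carlo draws from prior
      sigma = 0.3                                  # assumed observation error (mg/L)
      loglik = np.array([-0.5 * np.sum(((obs - model(k, t)) / sigma) ** 2)
                         for k in k_prior])
      w = np.exp(loglik - loglik.max()); w /= w.sum()
      print("posterior mean k:", np.sum(w * k_prior))
      print("ML estimate    k:", k_prior[np.argmax(loglik)])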

  9. Maximum likelihood estimates of pairwise rearrangement distances.

    PubMed

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R

    2017-06-21

    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.

  10. Convolutional codes. II - Maximum-likelihood decoding. III - Sequential decoding

    NASA Technical Reports Server (NTRS)

    Forney, G. D., Jr.

    1974-01-01

    Maximum-likelihood decoding is characterized as the determination of the shortest path through a topological structure called a trellis. Aspects of code structure are discussed along with questions regarding maximum-likelihood decoding on memoryless channels. A general bounding technique is introduced. The technique is used to obtain asymptotic bounds on the probability of error for maximum-likelihood decoding and list-of-2 decoding. The basic features of sequential algorithms are discussed along with a stack algorithm, questions of computational distribution, and the martingale approach to computational bounds.
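
    The trellis shortest-path view translates directly into the Viterbi algorithm. Below is a self-contained sketch for a standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal) with BPSK over AWGN, where the maximum-likelihood path minimizes the accumulated squared Euclidean distance; the specific code is our choice for illustration.

      import numpy as np

      G = [(1, 1, 1), (1, 0, 1)]                       # generator taps (7, 5) octal

      def branch_output(state, bit):
          """Two coded bits produced when `bit` enters register state (b1, b2)."""
          reg = (bit,) + state
          return [sum(r * g for r, g in zip(reg, taps)) % 2 for taps in G]

      def viterbi(received):
          """ML sequence estimate: shortest path through the 4-state trellis."""
          states = [(a, b) for a in (0, 1) for b in (0, 1)]
          metric = {s: (0.0 if s == (0, 0) else np.inf) for s in states}
          paths = {s: [] for s in states}
          for r in np.reshape(received, (-1, 2)):      # one trellis stage per pair
              new_metric = {s: np.inf for s in states}
              new_paths = {s: [] for s in states}
              for s in states:
                  if not np.isfinite(metric[s]):
                      continue
                  for bit in (0, 1):
                      sym = [1.0 - 2.0 * o for o in branch_output(s, bit)]
                      m = metric[s] + sum((a - b) ** 2 for a, b in zip(r, sym))
                      nxt = (bit, s[0])
                      if m < new_metric[nxt]:          # keep the survivor only
                          new_metric[nxt] = m
                          new_paths[nxt] = paths[s] + [bit]
              metric, paths = new_metric, new_paths
          return paths[min(metric, key=metric.get)]

      bits = [1, 0, 1, 1]                              # encode, then decode
      state, tx = (0, 0), []
      for b in bits:
          tx += [1.0 - 2.0 * o for o in branch_output(state, b)]
          state = (b, state[0])
      print(viterbi(np.array(tx)))                     # -> [1, 0, 1, 1]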

  11. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach

    PubMed Central

    2013-01-01

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters by following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) that are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard-cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard-cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370
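
    A toy version of the contrastive-divergence idea, under our own simplifying assumptions: for an energy model p(x) proportional to exp(-E(x; w)), the log-likelihood gradient is the difference between the model average and the data average of dE/dw, and CD-k replaces the expensive model average with k MCMC steps started from the data. Here E(x; w) = w·x², i.e., a zero-mean Gaussian with precision 2w.

      import numpy as np

      rng = np.random.default_rng(3)
      data = rng.normal(0.0, 0.5, 2000)          # true w = 1 / (2 * 0.5**2) = 2.0

      w, step, prop_sd = 0.5, 0.05, 0.3
      for _ in range(2000):
          # CD-1: one Metropolis step per data point, chains started at the data
          prop = data + rng.normal(0.0, prop_sd, data.size)
          accept = rng.random(data.size) < np.exp(w * (data**2 - prop**2))
          sample = np.where(accept, prop, data)
          # ascend the approximate log-likelihood gradient <x^2>_model - <x^2>_data
          w += step * (np.mean(sample**2) - np.mean(data**2))
      print(w)                                   # approaches roughly 2.0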

  12. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    SciTech Connect

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force-field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  13. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    PubMed Central

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-01-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force-field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field. PMID:26723596

  14. 230Th and 234Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina

    2016-05-01

    We applied maximum likelihood estimation to measurements of Th isotopes (234Th and 230Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study contains two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, that we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y-1, fast-sinking particles: 13.96 ± 0.48 y-1) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y-1, fast-sinking particles: 1.81 ± 0.44 y-1). Aggregation rate constants were higher, 1.88 ± 0.04 y-1, in the upper layer and just 0.07 ± 0.01 y-1 in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y-1 in the upper

  15. Relevance Data for Language Models Using Maximum Likelihood.

    ERIC Educational Resources Information Center

    Bodoff, David; Wu, Bin; Wong, K. Y. Michael

    2003-01-01

    Presents a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval parameters. Discusses similarities to language models; the unification of document-oriented and query-oriented views; tests on data sets; algorithms and scalability; and the effectiveness of maximum likelihood…

  16. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  17. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  18. Maximum Likelihood Estimation of Population Parameters

    PubMed Central

    Fu, Y. X.; Li, W. H.

    1993-01-01

    One of the most important parameters in population genetics is θ = 4N_eμ, where N_e is the effective population size and μ is the rate of mutation per gene per generation. We study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter θ over existing methods and the other is the estimation of the parameter λ, which is the ratio of two θ's. The minimum variances of estimates of the parameter θ are derived under two idealized situations. These minimum variances serve as the lower bounds of the variances of all possible estimates of θ in practice. We then show that Watterson's estimate of θ based on the number of segregating sites is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ_1/θ_2 is obtained and the properties of the estimate are discussed. PMID:8375660
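
    Watterson's estimator mentioned above is simple enough to state in full: with S segregating sites among n sampled sequences, theta_W = S / a_n, where a_n = sum_{i=1}^{n-1} 1/i.

      # Watterson's estimate of theta = 4*Ne*mu from the number of segregating
      # sites, the estimator the abstract shows to be asymptotically optimal.
      def watterson_theta(segregating_sites: int, n_sequences: int) -> float:
          a_n = sum(1.0 / i for i in range(1, n_sequences))
          return segregating_sites / a_n

      print(watterson_theta(42, 10))   # e.g. S = 42 sites in n = 10 sequences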

  19. Maximum likelihood estimation of population parameters

    SciTech Connect

    Fu, Y.X.; Li, W.H. )

    1993-08-01

    One of the most important parameters in population genetics is θ = 4N_eμ, where N_e is the effective population size and μ is the rate of mutation per gene per generation. The authors study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter θ over existing methods and the other is the estimation of the parameter λ, which is the ratio of two θ's. The minimum variances of estimates of the parameter θ, derived under two idealized situations, serve as the lower bounds of the variances of all possible estimates of θ in practice. The authors then show that Watterson's estimate of θ based on the number of segregating sites is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ_1/θ_2 is obtained and the properties of the estimate are discussed. 9 refs., 3 figs., 3 tabs.

  20. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, …, n, from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  1. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  2. A Maximum-Likelihood Approach for Localizing and Characterizing Special Nuclear Material with a Dual-Particle Imager

    NASA Astrophysics Data System (ADS)

    Polack, John Kyle

    The threat of nuclear warfare is an ongoing global concern. The reduction of this threat is an international effort that is aided by numerous technologies aimed at the detection, localization, and characterization of special nuclear material (SNM). Radiation imaging systems have a distinct advantage over more traditional systems in that they are designed to provide localized information about the observed radiation field. Accurate characterization of detected sources requires robust reconstruction algorithms with well-characterized uncertainties that are capable of delivering consistently reliable information. This work presents a versatile detection system and powerful reconstruction algorithm that, when combined, are well suited to meeting many present challenges in the nuclear nonproliferation and treaty verification fields. The detection system, known as the Dual-Particle Imager (DPI), is a single device capable of performing both imaging and spectroscopy with photons and fast neutrons. The reconstruction algorithm leverages maximum-likelihood expectation-maximization (MLEM) and a robust system matrix, simulated in MCNPX-PoliMi, to perform image reconstruction and simultaneously unfold localized energy spectra for each pixel in the image. The localized energy spectra make it possible to analyze multiple detected sources simultaneously, and the spectrum-unfolding capabilities significantly improve the information available for characterizing the sources. Combining this algorithm with the DPI allows localized spectra to be computed for photons and neutrons, which is particularly useful for the localization and characterization of SNM. The capabilities of this algorithm have been demonstrated through a series of experiments, and a detailed analysis of the uncertainties associated with the reconstruction process has been performed. The algorithm was used to analyze two complex environments, measured by the DPI, that each contained an SNM sample in a field of
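
    The MLEM iteration at the core of such reconstructions has a compact multiplicative form: for Poisson counts y ≈ A·λ, each update is λ ← (λ / Aᵀ1) · Aᵀ(y / Aλ). The sketch below runs it on a random stand-in system matrix; in the DPI the matrix comes from MCNPX-PoliMi simulations, and the full algorithm also unfolds per-pixel spectra.

      import numpy as np

      rng = np.random.default_rng(4)
      n_det, n_pix = 200, 64
      A = rng.random((n_det, n_pix))               # stand-in system matrix
      truth = rng.random(n_pix)
      y = rng.poisson(A @ truth)                   # simulated Poisson measurements

      lam = np.ones(n_pix)                         # uniform initial image
      sens = A.T @ np.ones(n_det)                  # sensitivity A^T 1
      for _ in range(100):
          lam *= (A.T @ (y / (A @ lam + 1e-12))) / sens
      print(np.corrcoef(lam, truth)[0, 1])         # reconstruction vs. truth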

  3. Maximum likelihood for genome phylogeny on gene content.

    PubMed

    Zhang, Hongmei; Gu, Xun

    2004-01-01

    With the rapid growth of entire-genome data, reconstructing the phylogenetic relationships among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of several approaches and has been very successful. However, no application to genome tree-making has been reported, mainly due to the lack of an analytical form of the probability model and/or the complicated calculation burden. In this paper we studied the mathematical structure of the stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome case using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high rate of correct identification. Application to real data provides satisfactory results.

  4. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  5. Market Penetration of Competing New Technology: A Maximum Likelihood (MLE) Approach to Modeling the Emergence of the Electronic Ballasts

    SciTech Connect

    Fathelrahman, Eihab M. ); Anderson, Dave M. )

    2003-08-20

    Technology is the major driving force of productivity gains and economic growth. Historical studies throughout the last decade attributed about half of economic growth to technological change and the other half to the combined effect of all other driving forces, such as the larger and better-qualified labor force and the accumulated stock of capital. V. Peterka (1977) was one of the first to break new ground in exploring mathematical methods for forecasting market shares of competing technologies. Having information about the historical market shares of competing technologies, Peterka described a unique set of algebraic derivations in the MLEST model. Since Peterka, a large number of studies have described the theoretical basis for technology diffusion; however, very few provided real-world examples or verified the applicability of market diffusion theory to energy applications. Objective: The objective of this study is to provide an example of projecting market shares of competing technologies using maximum likelihood estimation (MLE). The application applies to the emergence of the electronic ballast for fluorescent lighting applications. In this example we model the historical competition between the existing technology (magnetic ballasts) and the emerging technology (electronic ballasts). The factors surrounding electronic ballasts as a replacement for magnetic ballasts provide a rich example of competing technologies. The lessons from this example could be used to inform forecasting of many other similar technologies penetrating the market in the U.S. energy sector (e.g., fuel cells, digital information and communication technologies (ICTs), etc.). The example will forecast the market shares of magnetic and electronic ballast technologies to 2020 and discuss the energy savings and other benefits. This type of modeling and analysis can help inform the rule-making process for any potential future DOE standards for electronic ballasts. Method: Kennedy, Peter (1993

  6. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    ERIC Educational Resources Information Center

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  7. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
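
    As a baseline for comparison, the non-adaptive pseudoinverse mode filter mentioned above is essentially a one-liner: with a sensors-by-modes matrix Phi of sampled mode shapes and array pressure p = Phi·a + noise, modal amplitudes are estimated as a_hat = pinv(Phi)·p. The sketch below shows only this baseline on synthetic data; the PCML filter replaces it with adaptive weights constrained to physically realizable covariances, which is beyond this sketch.

      import numpy as np

      rng = np.random.default_rng(5)
      n_sensors, n_modes = 32, 6
      Phi = rng.normal(size=(n_sensors, n_modes))      # stand-in sampled mode shapes
      a_true = rng.normal(size=n_modes)
      p = Phi @ a_true + 0.1 * rng.normal(size=n_sensors)

      a_hat = np.linalg.pinv(Phi) @ p                  # pseudoinverse mode filter
      print(np.round(a_hat - a_true, 2))               # residual modal errors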

  8. Development of an integrated genetic map of a sugarcane (Saccharum spp.) commercial cross, based on a maximum-likelihood approach for estimation of linkage and linkage phases.

    PubMed

    Garcia, A A F; Kido, E A; Meza, A N; Souza, H M B; Pinto, L R; Pastina, M M; Leite, C S; Silva, J A G da; Ulian, E C; Figueira, A; Souza, A P

    2006-01-01

    Sugarcane (Saccharum spp.) is a clonally propagated outcrossing polyploid crop of great importance in tropical agriculture. Up to now, all sugarcane genetic maps had been developed using either full-sib progenies derived from interspecific crosses or from selfing, approaches not directly adopted in conventional breeding. We have developed a single integrated genetic map using a population derived from a cross between two pre-commercial cultivars ('SP80-180' x 'SP80-4966'), using a novel approach based on simultaneous maximum-likelihood estimation of linkage and linkage phases, specially designed for outcrossing species. From a total of 1,118 single-dose markers (RFLP, SSR and AFLP) identified, 39% derived from a testcross configuration between the parents, segregating in a 1:1 fashion, while 61% segregated 3:1, representing markers heterozygous in both parents with the same genotypes. The markers segregating 3:1 were used to establish linkage between the testcross markers. The final map comprised 357 linked markers, including 57 RFLPs, 64 SSRs and 236 AFLPs, assigned to 131 co-segregation groups, considering a LOD score of 5 and a recombination fraction of 37.5 cM, with map distances estimated by the Kosambi function. The co-segregation groups represented a total map length of 2,602.4 cM, with a marker density of 7.3 cM. When the same data were analyzed using JoinMap software, only 217 linked markers were assigned to 98 co-segregation groups, spanning 1,340 cM, with a marker density of 6.2 cM. The maximum-likelihood approach reduced the number of unlinked markers to 761 (68.0%), compared to 901 (80.5%) using JoinMap. All the co-segregation groups obtained using JoinMap were present in the map constructed based on the maximum-likelihood method. Differences in marker order within the co-segregation groups were observed between the two maps. Based on RFLP and SSR markers, 42 of the 131 co-segregation groups were assembled into 12 putative

  9. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)

  10. Maximum likelihood estimates of polar motion parameters

    NASA Technical Reports Server (NTRS)

    Wilson, Clark R.; Vicente, R. O.

    1990-01-01

    Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.

  11. Maximum likelihood inference of reticulate evolutionary histories

    PubMed Central

    Yu, Yun; Dong, Jianrong; Liu, Kevin J.; Nakhleh, Luay

    2014-01-01

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173

  12. Maximum likelihood inference of reticulate evolutionary histories.

    PubMed

    Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay

    2014-11-18

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships.

  13. Maximum likelihood Jukes-Cantor triplets: analytic solutions.

    PubMed

    Chor, Benny; Hendy, Michael D; Snir, Sagi

    2006-03-01

    Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationships of a set of taxa from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood function and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.
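
    The closed-form flavor of such analytic ML solutions is easiest to see in the two-taxon Jukes-Cantor case, where the likelihood is maximized analytically at d_hat = -(3/4)·ln(1 - 4p/3) for an observed mismatch proportion p:

      import math

      def jc69_ml_distance(mismatches: int, sites: int) -> float:
          """Analytic ML distance (substitutions/site) under Jukes-Cantor."""
          p = mismatches / sites
          if p >= 0.75:
              raise ValueError("p >= 3/4: distance undefined (saturation)")
          return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

      print(jc69_ml_distance(30, 500))   # 6% mismatch -> ~0.063 subs/site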

  14. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    DTIC Science & Technology

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965).

  15. Noncoherent Maximum Likelihood Detection for Differential Spatial Multiplexing MIMO Systems

    NASA Astrophysics Data System (ADS)

    Jia, Ziyan; Yoshii, Katsunobu; Handa, Shiro; Sasamori, Fumihito; Oshita, Shinjiro

    In this paper, we propose a novel noncoherent maximum likelihood detection (NMLD) method for differential spatial multiplexing (SM) multiple-input multiple-output (MIMO) systems. Unlike the conventional maximum likelihood detection (MLD) method, which needs knowledge of the channel state information (CSI) at the receiver, the NMLD method has no need of CSI at either the transmitter or the receiver. After repartitioning the observation block of multiple-symbol differential detection (MSDD) and following a decision feedback process, the decision metric of NMLD is derived by reforming that of MSDD. Since the maximum Doppler frequency and noise power are included in the derived decision metric, estimates of both the maximum Doppler frequency and the noise power are needed at the receiver for NMLD. A fast calculation algorithm (FCA) is applied to reduce the computational complexity of NMLD. The feasibility of the proposed NMLD is demonstrated by computer simulations in both slow and fast fading environments. Simulation results show that the proposed NMLD has good bit error rate (BER) performance, approaching that of conventional coherent MLD as the reference-symbol interval is extended. It is also shown that the BER performance is not sensitive to estimation errors in the maximum Doppler frequency and noise power.

  16. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed-form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  17. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  18. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
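
    For contrast with the paper's estimator, here is the naive two-step approach it improves on, under our own simplifying assumptions: estimate the lag-τ transition matrix by row-normalized transition counts (its maximum likelihood estimate), then recover a rate matrix as K = logm(T)/τ. Nothing here enforces detailed balance, or even that K is a valid rate matrix, which is precisely the motivation for a constrained MLE.

      import numpy as np
      from scipy.linalg import logm

      def naive_rate_matrix(traj, n_states, tau=1.0):
          counts = np.zeros((n_states, n_states))
          for i, j in zip(traj[:-1], traj[1:]):            # lag-1 transitions
              counts[i, j] += 1
          T = counts / counts.sum(axis=1, keepdims=True)   # MLE transition matrix
          return logm(T).real / tau                        # matrix log, not constrained

      traj = [0, 0, 1, 1, 1, 0, 2, 2, 1, 0, 0, 1, 2, 2, 0]
      print(naive_rate_matrix(traj, n_states=3))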

  19. A maximum likelihood approach to generate hypotheses on the evolution and historical biogeography in the Lower Volga Valley regions (southwest Russia)

    PubMed Central

    Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico

    2012-01-01

    The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with varying numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appears to be younger than that of the adjacent steppes and desert-like regions, regardless of the chronology of the transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent of one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179

  20. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.

    PubMed

    Schuler, Megan S; Rose, Sherri

    2017-01-01

    Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.
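
    A condensed numerical sketch of the TMLE recipe for the average treatment effect, assuming a binary outcome and simple parametric initial fits where the authors would recommend super learning; the variable names and simulated data are ours, not the paper's.

      import numpy as np
      import statsmodels.api as sm
      from scipy.special import expit, logit
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 5000
      W = rng.normal(size=(n, 2))                                # confounders
      A = rng.binomial(1, expit(0.4 * W[:, 0]))                  # treatment
      Y = rng.binomial(1, expit(0.5 * A + 0.3 * W[:, 1]))        # binary outcome

      # 1) initial outcome regression Q(A, W) and counterfactual predictions
      Qfit = LogisticRegression().fit(np.column_stack([A, W]), Y)
      Q_A = Qfit.predict_proba(np.column_stack([A, W]))[:, 1]
      Q_1 = Qfit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
      Q_0 = Qfit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

      # 2) propensity score g(W) and the "clever covariate" H(A, W)
      g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]
      H = A / g - (1 - A) / (1 - g)

      # 3) targeting step: one-parameter logistic fluctuation with offset logit(Q)
      eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
                   offset=logit(Q_A)).fit().params[0]

      # 4) update the counterfactual predictions and average their contrast
      Q1s = expit(logit(Q_1) + eps / g)
      Q0s = expit(logit(Q_0) - eps / (1 - g))
      print("TMLE ATE estimate:", np.mean(Q1s - Q0s))            # truth ~ 0.11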

  1. Weibull distribution based on maximum likelihood with interval inspection data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.

    1985-01-01

    The two Weibull parameters are determined by the method of maximum likelihood. The test data were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
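
    As a rough illustration (hypothetical data, not the SSME blade data), the interval-inspection likelihood assigns each failure the probability mass of its inspection interval, S(L) - S(R), and each surviving unit the term S(t_last), with S(t) = exp(-(t/lambda)^k):

```python
# Sketch: Weibull MLE from interval inspection data.  Each failure is
# known only to lie in (L, R]; unfailed units are right-censored.
import numpy as np
from scipy.optimize import minimize

intervals = np.array([(0., 100.), (100., 200.), (100., 200.), (200., 300.)])
censored = np.array([300., 300., 300.])          # still unfailed at t = 300

def neg_log_lik(params):
    k, lam = np.exp(params)                      # enforce k, lambda > 0
    S = lambda t: np.exp(-(t / lam) ** k)
    ll = np.sum(np.log(S(intervals[:, 0]) - S(intervals[:, 1])))
    ll += np.sum(np.log(S(censored)))
    return -ll

fit = minimize(neg_log_lik, x0=np.log([1.5, 250.0]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print(f"shape k = {k_hat:.3f}, scale lambda = {lam_hat:.1f}")
```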

  2. Maximum Likelihood Factor Structure of the Family Environment Scale.

    ERIC Educational Resources Information Center

    Fowler, Patrick C.

    1981-01-01

    Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)

  3. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.

  4. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
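
    A minimal sketch of the model class, with synthetic data standing in for the rubber and stock price series: scikit-learn's GaussianMixture fits a two-component normal mixture by maximum likelihood via the EM algorithm.

```python
# Sketch: two-component normal mixture fitted by maximum likelihood (EM).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.5, 300),
                    rng.normal(2.0, 1.0, 700)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("log-likelihood per point:", gm.score(x))
```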

  5. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  6. Properties of maximum likelihood male fertility estimation in plant populations.

    PubMed Central

    Morgan, M T

    1998-01-01

    Computer simulations are used to evaluate maximum likelihood methods for inferring male fertility in plant populations. The maximum likelihood method can provide substantial power to characterize male fertilities at the population level. Results emphasize, however, the importance of adequate experimental design and evaluation of fertility estimates, as well as limitations to inference (e.g., about the variance in male fertility or the correlation between fertility and phenotypic trait value) that can be reasonably drawn. PMID:9611217

  7. Effects of parameter estimation on maximum-likelihood bootstrap analysis.

    PubMed

    Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack

    2010-08-01

    Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-08-01

    Detecting cancerous lesions is one major application in emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, conventional penalty function, and a penalty function for isotropic point spread function. The lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement is dependent on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.
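
    The flavor of penalized-ML reconstruction can be conveyed with a generic one-step-late EM update for a Poisson likelihood plus a quadratic smoothness penalty (a toy 1-D system, not the paper's detectability-optimized penalty):

```python
# Sketch: penalized maximum-likelihood emission reconstruction via a
# one-step-late EM update with a quadratic (smoothness) penalty.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_det = 32, 48
A = rng.uniform(0, 1, (n_det, n_pix))             # toy system matrix
x_true = np.ones(n_pix); x_true[12:18] = 5.0      # "lesion" on background
y = rng.poisson(A @ x_true)                       # noisy projection data

beta = 0.1
x = np.ones(n_pix)
sens = A.T @ np.ones(n_det)
for _ in range(200):
    # gradient of U(x) = sum_i (x_i - x_{i+1})^2 (circular neighbourhood)
    grad_pen = 2 * (2 * x - np.roll(x, 1) - np.roll(x, -1))
    ratio = A.T @ (y / np.maximum(A @ x, 1e-12))
    x = x * ratio / np.maximum(sens + beta * grad_pen, 1e-12)
print(x.round(2))
```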

  9. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
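
    A plain (unaccelerated) projected-gradient sketch conveys the core idea: ascend the log-likelihood L(rho) = sum_k f_k log tr(E_k rho) and project each iterate back onto the density matrices by projecting its eigenvalues onto the probability simplex. The single-qubit measurement and frequencies below are made up.

```python
# Sketch: projected-gradient maximum-likelihood quantum state tomography.
import numpy as np

def simplex_projection(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v + (1 - css[j]) / (j + 1), 0)

def project_density(M):
    H = (M + M.conj().T) / 2                      # Hermitian part
    w, V = np.linalg.eigh(H)
    return (V * simplex_projection(w)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)
povm = [(I2 + P) / 2 for P in (sx, -sx, sz, -sz)]  # two Pauli settings
freq = np.array([0.42, 0.08, 0.35, 0.15])          # observed frequencies

rho, step = I2 / 2, 0.1
for _ in range(500):
    probs = np.array([np.trace(E @ rho).real for E in povm])
    grad = sum(f / p * E for f, p, E in zip(freq, probs, povm))
    rho = project_density(rho + step * grad)
print(np.round(rho, 3))
```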

  10. Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.

    ERIC Educational Resources Information Center

    Harwell, Michael R.; And Others

    1988-01-01

    The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)

  11. Investigating bias in maximum-likelihood quantum-state tomography

    NASA Astrophysics Data System (ADS)

    Silva, G. B.; Glancy, S.; Vasconcelos, H. M.

    2017-02-01

    Maximum-likelihood quantum-state tomography yields estimators that are consistent, provided that the likelihood model is correct, but the maximum-likelihood estimators may have bias for any finite data set. The bias of an estimator is the difference between the expected value of the estimate and the true value of the parameter being estimated. This paper investigates bias in the widely used maximum-likelihood quantum-state tomography. Our goal is to understand how the amount of bias depends on factors such as the purity of the true state, the number of measurements performed, and the number of different bases in which the system is measured. For this, we perform numerical experiments that simulate optical homodyne tomography of squeezed thermal states under various conditions, perform tomography, and estimate bias in the purity of the estimated state. We find that estimates of higher purity states exhibit considerable bias, such that the estimates have lower purities than the true states.

  12. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  13. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of parameters of the ex-Gaussian distribution. Additionally, for each approach maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
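
    For the single-level ML case, scipy's exponnorm distribution is the ex-Gaussian under the reparameterization K = tau/sigma, loc = mu, scale = sigma; a minimal sketch with simulated response times (the paper's hierarchical and Bayesian fits need more machinery than this):

```python
# Sketch: single-level maximum likelihood fit of the ex-Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, tau = 0.40, 0.05, 0.30                 # seconds
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu = {loc:.3f}, sigma = {scale:.3f}, tau = {K * scale:.3f}")
```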

  14. Assessing allelic dropout and genotype reliability using maximum likelihood.

    PubMed Central

    Miller, Craig R; Joyce, Paul; Waits, Lisette P

    2002-01-01

    A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective, but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071

  15. Parameter estimation in X-ray astronomy using maximum likelihood

    NASA Technical Reports Server (NTRS)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
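
    A toy version of the Poisson-likelihood approach: minimize the Poisson negative log-likelihood (the Cash statistic, C = 2 sum_i (m_i - d_i ln m_i) up to a model-independent constant) for a hypothetical power-law spectrum, rather than minimum chi-squared.

```python
# Sketch: Poisson maximum likelihood ("Cash statistic") fit of a toy
# power-law spectrum to binned X-ray counts.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
E = np.linspace(1.0, 10.0, 40)                    # keV bin centres
model = lambda p: p[0] * E ** (-p[1])             # amplitude, photon index
data = rng.poisson(model([200.0, 1.7]))

def cash(p):
    m = np.maximum(model(p), 1e-12)
    return 2.0 * np.sum(m - data * np.log(m))

fit = minimize(cash, x0=[100.0, 1.0], method="Nelder-Mead")
print("amplitude, index:", fit.x)
```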

  16. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  17. Maximum likelihood classification of synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L. S.

    1985-01-01

    Classification of synthetic aperture radar (SAR) images has important applications in geology, agriculture, and the military. A statistical model for SAR images is reviewed and a maximum likelihood classification algorithm developed for the classification of agricultural fields based on the model. It is first assumed that the target feature information is known a priori. The performance of the algorithm is then evaluated in terms of the probability of incorrect classification. A technique is also presented to extract the needed feature information from a SAR image; then both the feature extraction and the maximum likelihood classification algorithms are tested on a SEASAT-A SAR image.

  18. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    PubMed

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The example illustrates practical aspects of the associated computational, inferential, and data-analytic techniques.

  19. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    PubMed Central

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875

  20. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  1. Maximum Likelihood Detection of Electro-Optic Moving Targets

    DTIC Science & Technology

    1992-01-16

    The description of a maximum likelihood algorithm to detect moving targets in electro-optic data is presented. The algorithm is based on processing...optimum algorithm to determine the performance loss. A processing architecture concept is also described. Keywords: electro-optic sensor, detection, infrared sensor, moving target, binary integration, velocity filter.

  3. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems.

    PubMed

    Daigle, Bernie J; Roh, Min K; Petzold, Linda R; Niemi, Jarad

    2012-05-01

    A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM(2)): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM(2) substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods. This work provides a novel, accelerated version of the MCEM algorithm for maximum likelihood parameter estimation in stochastic biochemical systems.

  4. Cases In Which Ancestral Maximum Likelihood Will Be Confusingly Misleading.

    PubMed

    Handelman, Tomer; Chor, Benny

    2017-03-02

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: There are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel, Roch, and Steel (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: For some simple, four taxa (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology, has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero, and are not adjacent, so this resolved tree is in fact a simple path. While for MP, the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion.

  5. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  6. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  8. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  9. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
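
    The convolution-based MLE itself is beyond a snippet, but the Delogne-Kåsa-style algebraic fit mentioned in the abstract is easy to sketch and is a common seed for iterative ML refinement; the data here are synthetic.

```python
# Sketch: Delogne-Kasa-style algebraic circle fit.  Solves
# x^2 + y^2 + a*x + b*y + c ~ 0 by linear least squares; then
# center = (-a/2, -b/2) and radius^2 = cx^2 + cy^2 - c.
import numpy as np

rng = np.random.default_rng(5)
t = rng.uniform(0, 2 * np.pi, 100)
x = 3.0 + 2.0 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = -1.0 + 2.0 * np.sin(t) + rng.normal(0, 0.05, t.size)

A = np.column_stack([x, y, np.ones_like(x)])
b = -(x ** 2 + y ** 2)
(a_, b_, c_), *_ = np.linalg.lstsq(A, b, rcond=None)
cx, cy = -a_ / 2, -b_ / 2
r = np.sqrt(cx ** 2 + cy ** 2 - c_)
print(f"center = ({cx:.3f}, {cy:.3f}), radius = {r:.3f}")
```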

  10. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue.

  11. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it is derived very indirectly from a signal generated by the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  12. Maximum likelihood pedigree reconstruction using integer linear programming.

    PubMed

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.

  13. Maximum likelihood estimation of population divergence times and population phylogenies under the infinite sites model.

    PubMed

    Nielsen, R

    1998-04-01

    In this paper, a maximum likelihood estimator of population divergence time based on the infinite sites model is developed. It is demonstrated how this estimator may be applied to obtain maximum likelihood estimates of the topology of population phylogenies. This approach addresses several classical problems occurring in the inference of the phylogenetic relationship of populations, most notably the problem of shared ancestral polymorphisms. The method is applied to previously published data sets of human African populations and of Caribbean hawksbill turtles.

  14. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. Basic algebra is excessively complicated and a Maple code implementation is an important task in the solution process. Low order maximum likelihood moments are given and also Fisher's examples relating to data associated with ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.

  15. Use of historical information in a maximum-likelihood framework

    USGS Publications Warehouse

    Cohn, T.A.; Stedinger, J.R.

    1987-01-01

    This paper discusses flood-quantile estimators which can employ historical and paleoflood information, both when the magnitudes of historical flood peaks are known, and when only threshold-exceedance information is available. Maximum likelihood, quasi-maximum likelihood and curve fitting methods for simultaneous estimation of 1, 2 and 3 unknown parameters are examined. The information contained in a 100 yr record of historical observations, during which the flood perception threshold was near the 10 yr flood level (i.e., on average, one flood in ten is above the threshold and hence is recorded), is equivalent to roughly 43, 64 and 78 years of systematic record in terms of the improvement of the precision of 100 yr flood estimators when estimating 1, 2 and 3 parameters, respectively. With the perception threshold at the 100 yr flood level, the historical data was worth 13, 20 and 46 years of systematic data when estimating 1, 2 and 3 parameters, respectively. © 1987.

  16. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  17. Maximum Marginal Likelihood Estimation for Semiparametric Item Analysis.

    ERIC Educational Resources Information Center

    Ramsay, J. O.; Winsberg, S.

    1991-01-01

    A method is presented for estimating the item characteristic curve (ICC) using polynomial regression splines. Estimation of spline ICCs is described by maximizing the marginal likelihood formed by integrating ability over a beta prior distribution. Simulation results compare this approach with the joint estimation of ability and item parameters.…

  18. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
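
    A minimal sketch of the first ingredient, the Gaussian maximum likelihood classifier producing exhaustive, normalized class-membership probabilities (the kind used to initialize relaxation); the classes, statistics, and pixels below are invented.

```python
# Sketch: Gaussian maximum likelihood classifier with normalized
# class-membership probabilities, estimated from labelled training pixels.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
train = {
    "wheat": rng.normal([80, 40], 5, (200, 2)),
    "corn":  rng.normal([60, 70], 8, (200, 2)),
}
stats_ = {c: (s.mean(axis=0), np.cov(s.T)) for c, s in train.items()}

def class_probabilities(pixels):
    like = np.column_stack([
        multivariate_normal.pdf(pixels, mean=m, cov=C)
        for m, C in stats_.values()
    ])
    return like / like.sum(axis=1, keepdims=True)   # normalized posteriors

pixels = np.array([[78.0, 42.0], [63.0, 66.0]])
print(dict(zip(stats_.keys(), class_probabilities(pixels).T)))
```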

  20. Fast maximum likelihood joint estimation of frequency and frequency rate

    NASA Astrophysics Data System (ADS)

    Abatzoglou, Theagenis J.

    1986-11-01

    A fast maximum likelihood algorithm is presented that jointly estimates the frequency and frequency rate of a sinusoid corrupted by additive Gaussian white noise. It consists of a coarse search and a fine search. First, the two-dimensional frequency-frequency rate plane is subdivided into parallelograms whose size depends on the region of convergence of Newton's method used in maximizing the log-likelihood function (LLF). The size of the parallelogram is explicitly computed and is optimal for the method used. The coarse search consists of maximizing the LLF over the vertices of the parallelograms. Then, starting at the vertex where the LLF attained its maximum, a two-dimensional Newton's method to find the absolute maximum of the LLF is implemented. This last step constitutes the fine search. The rate of convergence of Newton's method is cubic, and is extremely fast. Furthermore, Newton's method will converge after two iterations when the starting point used in the method lies within 75 percent of the distances defined by the parallelogram of convergence whose center coincides with the true values of frequency and frequency rate. In this case, the rms errors for frequency and frequency rate are practically equal to the Cramer-Rao bound at all signal-to-noise ratios equal to or greater than 15 dB. The frequency-frequency rate ambiguity function is shown to be even, and its periodicities are extracted.
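
    A stripped-down sketch of the two-stage idea, with a plain rectangular grid and a generic local refinement standing in for the paper's optimal parallelogram tiling and explicit Newton steps:

```python
# Sketch: joint ML estimation of frequency and frequency rate of a noisy
# complex sinusoid: coarse grid search of the LLF, then local refinement.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
N = 256
n = np.arange(N)
f0, r0 = 0.12, 2e-4                               # cycles/sample, cycles/sample^2
x = np.exp(2j * np.pi * (f0 * n + 0.5 * r0 * n ** 2))
x += 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))

def neg_llf(p):
    f, r = p
    ref = np.exp(-2j * np.pi * (f * n + 0.5 * r * n ** 2))
    return -np.abs(np.sum(x * ref)) ** 2

fs = np.linspace(0.0, 0.5, 101)                   # coarse search grid
rs = np.linspace(0.0, 5e-4, 51)
coarse = min(((f, r) for f in fs for r in rs), key=neg_llf)
fit = minimize(neg_llf, coarse, method="Nelder-Mead")   # fine search
print("f, fdot estimates:", fit.x)
```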

  1. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  2. Maximum-likelihood analysis of the COBE angular correlation function

    NASA Technical Reports Server (NTRS)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q_rms-PS and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q_rms-PS = (15.7 ± 2.6) exp[0.46(1 - n)] μK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.

  3. CCD data processor for maximum likelihood feature classification

    NASA Technical Reports Server (NTRS)

    Benz, H. F.; Kelly, W. L.; Husson, C.; Culotta, P. W.; Snyder, W. E.

    1980-01-01

    The paper describes an advanced technology development which utilizes a high speed analog/binary CCD correlator to perform the matrix multiplications necessary to implement onboard feature classification. The matrix manipulation module uses the maximum likelihood classification algorithm assuming a Gaussian probability density function. The module will process 16-element multispectral vectors at rates in excess of 500 thousand multispectral vector elements per second. System design considerations for the optimum use of this module are discussed, test results from initial device fabrication runs are presented, and the performance in typical processing applications is described.

  4. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  5. A 3D approximate maximum likelihood localization solver

    SciTech Connect

    2016-09-23

    A robust three-dimensional solver was needed to estimate, accurately and efficiently, the time sequence of locations of fish tagged with acoustic transmitters and of vocalizing marine mammals, in sufficient detail to assess the function of dam-passage design alternatives and to support marine renewable energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
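
    Under Gaussian timing errors, an approximate-ML time-difference-of-arrival (TDOA) solver reduces to nonlinear least squares on the arrival-time differences. The sketch below uses an invented hydrophone geometry and noise level, not the deployed solver.

```python
# Sketch: approximate maximum-likelihood 3-D source localization from
# TDOA measurements via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                        # sound speed in water, m/s
hydrophones = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0],
                        [0, 0, 50], [50, 50, 10]], float)
source = np.array([20.0, 35.0, 12.0])             # true position (unknown)

rng = np.random.default_rng(8)
toa = np.linalg.norm(hydrophones - source, axis=1) / c
tdoa = (toa[1:] - toa[0]) + rng.normal(0, 1e-5, len(toa) - 1)

def residuals(p):
    r = np.linalg.norm(hydrophones - p, axis=1) / c
    return (r[1:] - r[0]) - tdoa

fit = least_squares(residuals, x0=np.array([25.0, 25.0, 25.0]))
print("estimated source position:", fit.x.round(2))
```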

  6. Precision of maximum likelihood estimation in adaptive designs.

    PubMed

    Graf, Alexandra Christine; Gutjahr, Georg; Brannath, Werner

    2016-03-15

    There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type 1 error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where adaptation rules are not fixed in advance so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules like, for example, a maximum sample size. We consider designs that start with k treatment groups and a common control and where selection of a single treatment and control is performed at the interim analysis with the possibility to reassess each of the sample sizes. We consider the case of unlimited sample size reassessments as well as several realistically restricted sample size reassessment rules. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Maximum likelihood tuning of a vehicle motion filter

    NASA Technical Reports Server (NTRS)

    Trankle, Thomas L.; Rabin, Uri H.

    1990-01-01

    This paper describes the use of maximum likelihood estimation of unknown parameters appearing in a nonlinear vehicle motion filter. The filter uses the kinematic equations of motion of a rigid body in motion over a spherical earth. The nine states of the filter represent vehicle velocity, attitude, and position. The inputs to the filter are three components of translational acceleration and three components of angular rate. Measurements used to update states include air data, altitude, position, and attitude. Expressions are derived for the elements of filter matrices needed to use air data in a body-fixed frame with filter states expressed in a geographic frame. An expression for the likelihood function of the data is given, along with accurate approximations for the function's gradient and Hessian with respect to unknown parameters. These are used by a numerical quasi-Newton algorithm for maximizing the likelihood function of the data in order to estimate the unknown parameters. The parameter estimation algorithm is useful for processing data from aircraft flight tests or for tuning inertial navigation systems.

  8. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  9. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  10. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    PubMed Central

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
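
    The gradient comparison makes the point concrete: for a logistic output unit, ML training minimizes cross-entropy, whose gradient lacks the extra sigmoid-derivative factor that can stall LS-BP on its flat spots. A minimal single-layer sketch with synthetic data:

```python
# Sketch: ML (cross-entropy) vs. least-squares training of a logistic unit.
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

def train(loss, lr=0.5, epochs=300):
    w = np.zeros(3)
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        if loss == "cross_entropy":            # ML gradient: X^T (p - y)
            grad = X.T @ (p - y) / len(y)
        else:                                  # LS gradient keeps p(1-p) factor
            grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
        w -= lr * grad
    p = 1 / (1 + np.exp(-(X @ w)))
    return w, float(np.mean((p > 0.5) == y))

for loss in ("cross_entropy", "least_squares"):
    w, acc = train(loss)
    print(f"{loss}: training accuracy {acc:.3f}")
```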

  11. Maximum Likelihood Estimation for Multiple Camera Target Tracking on Grassmann Tangent Subspace.

    PubMed

    Amini-Omam, Mojtaba; Torkamani-Azar, Farah; Ghorashi, Seyed Ali

    2016-11-15

    In this paper, we introduce a likelihood model for tracking the location of object in multiple view systems. Our proposed model transforms conventional nonlinear Euclidean estimation model to an estimation model based on the manifold tangent subspace. In this paper, we show that by decomposition of input noise into two parts and description of model by exponential map, real observations in the Euclidean geometry can be transformed to the manifold tangent subspace. Moreover, by obtained tangent subspace likelihood function, we propose two iterative and noniterative maximum likelihood estimation approaches which numerical results show their good performance.

  12. Pattern recognition using maximum likelihood estimation and orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Islam, M. M.; Alam, M. S.

    2006-08-01

    Hyperspectral sensor imagery (HSI) is a relatively new area of research; however, it is being used extensively in geology, agriculture, defense, intelligence, and law enforcement applications. Much of the current research focuses on object detection with a low false alarm rate. Over the past several years, many object detection algorithms have been developed, including the linear detector, the quadratic detector, and the adaptive matched filter. In those methods the available data cube was used directly to determine the background mean and the covariance matrix, assuming that the number of object pixels is low compared to that of the data pixels. In this paper, we have used the orthogonal subspace projection (OSP) technique to find the background matrix from the given image data. Our algorithm consists of three parts. In the first part, we calculate the background matrix using the OSP technique. In the second part, we determine the maximum likelihood estimates of the parameters. Finally, we consider the likelihood ratio, commonly known as the Neyman-Pearson quadratic detector, to recognize the objects. The proposed technique has been investigated via computer simulation, where excellent performance has been observed.
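
    A minimal sketch of the final (detection) stage under Gaussian assumptions, with the background mean and covariance taken as their ML estimates from pixels assumed to be background; in the paper these quantities come from the OSP step, which is not reproduced here, and all names are illustrative.

        import numpy as np

        def quadratic_scores(background, test, target):
            """Neyman-Pearson quadratic detector scores for test spectra."""
            mu = background.mean(axis=0)                       # ML background mean
            icov = np.linalg.inv(np.cov(background, rowvar=False))
            r0 = test - mu                                     # background-only residual
            r1 = test - mu - target                            # target-present residual
            # difference of Mahalanobis distances: monotone in the likelihood ratio
            q0 = np.einsum('ij,jk,ik->i', r0, icov, r0)
            q1 = np.einsum('ij,jk,ik->i', r1, icov, r1)
            return q0 - q1                                     # threshold to declare targets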

  13. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    SciTech Connect

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while independent calibration data are used to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  14. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions.

    PubMed

    Park, Yongseok; Taylor, Jeremy M G; Kalbfleisch, John D

    2012-06-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method.

  15. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  16. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
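
    A compact sketch of the frequency-domain ML idea for fd processes (the Whittle approximation, not the S-MLE code evaluated above; the spectral shape and the relation H = d + 1/2 are standard for fractional differencing):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def whittle_hurst(x):
            """Whittle (approximate ML) estimate of H for an fd(d) series."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            I = np.abs(np.fft.rfft(x - x.mean()))**2 / n       # periodogram
            lam = 2.0 * np.pi * np.arange(1, len(I)) / n       # drop zero frequency
            I = I[1:]
            def nll(d):
                f = np.abs(2.0 * np.sin(lam / 2.0))**(-2.0 * d)      # fd spectral shape
                return np.log(np.mean(I / f)) + np.mean(np.log(f))   # scale profiled out
            d_hat = minimize_scalar(nll, bounds=(0.001, 0.499), method='bounded').x
            return d_hat + 0.5                                 # H = d + 1/2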

  17. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

  18. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  19. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility of extracting, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
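
    One concrete instance of this ML recipe is the binary configuration model, where the connection probability is p_ij = x_i x_j/(1 + x_i x_j) and the ML condition equates expected and observed degrees k_i. A minimal fixed-point sketch under those assumptions (variable names illustrative, positive degrees assumed):

        import numpy as np

        def fit_hidden_variables(k, n_iter=5000, tol=1e-10):
            """Solve the ML condition sum_{j != i} x_i x_j / (1 + x_i x_j) = k_i."""
            k = np.asarray(k, dtype=float)
            x = k / np.sqrt(k.sum())                 # standard starting point
            for _ in range(n_iter):
                M = x[None, :] / (1.0 + np.outer(x, x))
                np.fill_diagonal(M, 0.0)
                x_new = k / M.sum(axis=1)            # fixed-point update
                if np.max(np.abs(x_new - x)) < tol:
                    return x_new
                x = x_new
            return x

        # check: P = np.outer(x, x) / (1 + np.outer(x, x)); row sums (off-diagonal)
        # of P reproduce the observed degree sequence k at the ML solution.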

  20. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  1. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Treesearch

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  2. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In the methods and approaches developed for estimating the operating characteristics of the discrete item responses, the maximum likelihood estimate of the examinee based upon the "Old Test" has an important role. When Old Test does not provide a sufficient amount of test information for the upper and lower part of the ability interval,…

  3. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  4. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  5. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  6. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  7. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  8. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.

    PubMed

    Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng

    2017-04-17

    Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which directly impair the physician's ability to interpret the images and decrease the accuracy of clinical diagnosis. Denoising is therefore an important component in enhancing the quality of ultrasound images; however, current methods have notable limitations: they can remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edges before removing noise or speckle, and thus they reduce noise and speckle while blurring edge details. It is therefore a challenging issue for traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome these limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection, followed by filtering. First, a sorted quadrant median vector scheme is utilized to calculate the reference median in a filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, the noise is removed by a bilateral filter and the speckle is suppressed by a Rayleigh-maximum-likelihood filter, while the noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. Thereafter, the corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal-to-noise ratio (SNR) levels and

  9. Pedestrian cue detection: colour inverse maximum likelihood ratio

    NASA Astrophysics Data System (ADS)

    Braik, Malik; Pycock, David

    2013-12-01

    This paper presents an adaptable method for identifying pedestrian cues. Cue detection is investigated for adults in isolation and in groups. The aim is to detect a single cue for each pedestrian. Colour Inverse Maximum Likelihood Ratio (IMLR) criteria are employed to distinguish object and background regions using a mask designed to accommodate a wide range of appearances. The adaptability and specificity of the method are demonstrated using images containing trees and street furniture; structures that are often confused with pedestrians by computer vision systems. Test images of low contrast are also included to assess the sensitivity of the cue detection process. Evaluation with over 250 images gives a false positive error rate of 10% and a false negative error rate of 1.5% under exacting detection criteria, with a complexity that scales with n, the number of image points considered. The speed of execution is 8 ms per frame for images of 640 by 480 pixels on an Intel Core i3-2310M CPU running at 2.10 GHz with 4.00 GB RAM.

  10. Maximum likelihood sequence estimation for optical complex direct modulation.

    PubMed

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are inherently versatile optical transmitters. Through direct modulation (DM), intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters, with the other modulation suppressed or simply ignored. Nevertheless, with digital coherent detection, simultaneous intensity and angle modulation (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of the frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability that significantly enhances the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  11. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
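
    A minimal sketch of the Poisson ML fit for an acute exposure, where the yield reduces to the linear-quadratic form λ(d) = αd + βd² (the g(t, τ) time factor is omitted, and the counts below are illustrative, not the paper's data):

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative data: dose (Gy), cells scored, dicentrics observed
        dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
        cells = np.array([2000, 1500, 1000, 800, 500])
        dicentrics = np.array([15, 40, 110, 190, 250])

        def neg_log_lik(theta):
            alpha, beta = theta
            lam = cells * (alpha * dose + beta * dose**2)   # expected counts per group
            return np.sum(lam - dicentrics * np.log(lam))   # Poisson NLL up to a constant

        fit = minimize(neg_log_lik, x0=[0.01, 0.05],
                       bounds=[(1e-6, None), (1e-6, None)], method='L-BFGS-B')
        alpha_hat, beta_hat = fit.x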

  12. Single particle maximum likelihood reconstruction from superresolution microscopy images

    PubMed Central

    Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin

    2017-01-01

    Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters for an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation. PMID:28253349

  13. Maximum likelihood estimation in meta-analytic structural equation modeling.

    PubMed

    Oort, Frans J; Jak, Suzanne

    2016-06-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical properties is two-stage structural equation modeling, in which maximum likelihood analysis is used to estimate the common correlation matrix in the first stage, and weighted least squares analysis is used to fit structural equation models to the common correlation matrix in the second stage. In the present paper, we propose an alternative method, ML MASEM, that uses ML estimation throughout. In a simulation study, we use both methods and compare chi-square distributions, bias in parameter estimates, false positive rates, and true positive rates. Both methods appear to yield unbiased parameter estimates and false and true positive rates that are close to the expected values. ML MASEM parameter estimates are found to be significantly less biased than two-stage structural equation modeling estimates, but the differences are very small. The choice between the two methods may therefore be based on other fundamental or practical arguments. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Single particle maximum likelihood reconstruction from superresolution microscopy images.

    PubMed

    Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin

    2017-01-01

    Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters for an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation.

  15. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    DOE PAGES

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while independent calibration data are used to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  16. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  17. Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.

    PubMed

    Dessimoz, Christophe; Gil, Manuel

    2008-06-23

    The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.

  18. Partial order optimum likelihood (POOL): maximum likelihood prediction of protein active site residues using 3D Structure and sequence properties.

    PubMed

    Tong, Wenxu; Wei, Ying; Murga, Leonel F; Ondrechen, Mary Jo; Williams, Ronald J

    2009-01-01

    A new monotonicity-constrained maximum likelihood approach, called Partial Order Optimum Likelihood (POOL), is presented and applied to the problem of functional site prediction in protein 3D structures, an important current challenge in genomics. The input consists of electrostatic and geometric properties derived from the 3D structure of the query protein alone. Sequence-based conservation information, where available, may also be incorporated. Electrostatics features from THEMATICS are combined with multidimensional isotonic regression to form maximum likelihood estimates of probabilities that specific residues belong to an active site. This allows likelihood ranking of all ionizable residues in a given protein based on THEMATICS features. The corresponding ROC curves and statistical significance tests demonstrate that this method outperforms prior THEMATICS-based methods, which in turn have been shown previously to outperform other 3D-structure-based methods for identifying active site residues. Then it is shown that the addition of one simple geometric property, the size rank of the cleft in which a given residue is contained, yields improved performance. Extension of the method to include predictions of non-ionizable residues is achieved through the introduction of environment variables. This extension results in even better performance than THEMATICS alone and constitutes to date the best functional site predictor based on 3D structure only, achieving nearly the same level of performance as methods that use both 3D structure and sequence alignment data. Finally, the method also easily incorporates such sequence alignment data, and when this information is included, the resulting method is shown to outperform the best current methods using any combination of sequence alignments and 3D structures. Included is an analysis demonstrating that when THEMATICS features, cleft size rank, and alignment-based conservation scores are used individually or in combination
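
    As a one-dimensional illustration of monotonicity-constrained ML (a simplification of POOL's multidimensional isotonic regression; the feature and labels below are synthetic), pool-adjacent-violators isotonic regression of binary labels on a single score yields monotone probability estimates, which coincide with the Bernoulli ML solution under the monotonicity constraint:

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(1)
        score = rng.uniform(0, 1, 300)                           # one electrostatics-style feature
        label = (rng.uniform(0, 1, 300) < score).astype(float)   # P(active) rises with score

        iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
        p_hat = iso.fit_transform(score, label)                  # monotone ML probability estimates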

  19. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  20. Maximum likelihood random galaxy catalogues and luminosity function estimation

    NASA Astrophysics Data System (ADS)

    Cole, Shaun

    2011-09-01

    We present a new algorithm to generate a random (unclustered) version of a magnitude-limited observational galaxy redshift catalogue. It takes into account both galaxy evolution and the perturbing effects of large-scale structure. The key to the algorithm is a maximum likelihood (ML) method for jointly estimating both the luminosity function (LF) and the overdensity as a function of redshift. The random catalogue algorithm then works by cloning each galaxy in the original catalogue, with the number of clones determined by the ML solution. Each of these cloned galaxies is then assigned a random redshift uniformly distributed over the accessible survey volume, taking account of the survey magnitude limit(s) and, optionally, both luminosity and number density evolution. The resulting random catalogues, which can be employed in traditional estimates of galaxy clustering, make fuller use of the information available in the original catalogue and hence are superior to simply fitting a functional form to the observed redshift distribution. They are particularly well suited to studies of the dependence of galaxy clustering on galaxy properties, as each galaxy in the random catalogue has the same list of attributes as measured for the galaxies in the genuine catalogue. The derivation of the joint overdensity and LF estimator reveals the limit in which the ML estimate reduces to the standard 1/Vmax LF estimate, namely when one makes the prior assumption that there are no fluctuations in the radial overdensity. The new ML estimator can be viewed as a generalization of the 1/Vmax estimate in which Vmax is replaced by a density-corrected Vdc,max.
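
    For reference, a minimal sketch of the classical 1/Vmax estimate that the ML estimator generalizes (binning and names illustrative):

        import numpy as np

        def vmax_lf(abs_mag, v_max, bins):
            """Classical 1/Vmax luminosity function.

            abs_mag: absolute magnitudes of the galaxies
            v_max:   maximum volume within which each galaxy passes the flux limit
            bins:    magnitude bin edges
            """
            phi, _ = np.histogram(abs_mag, bins=bins, weights=1.0 / v_max)
            return phi / np.diff(bins)      # number density per unit magnitude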

  1. Maximum-likelihood estimation of haplotype frequencies in nuclear families.

    PubMed

    Becker, Tim; Knapp, Michael

    2004-07-01

    The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
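
    A minimal sketch of the underlying EM ("gene counting") update for unrelated individuals (FAMHAP's handling of nuclear families, missing genotypes, and the locus-iterative mode is not attempted here); genotypes are coded per SNP as 0/1/2 copies of the variant allele:

        import numpy as np
        from itertools import product
        from collections import defaultdict

        def compatible_pairs(g):
            """Enumerate unordered haplotype pairs consistent with genotype g."""
            hets = [i for i, x in enumerate(g) if x == 1]
            base = [x // 2 for x in g]              # 0 -> 0, 2 -> 1; het sites filled below
            if not hets:
                h = tuple(base)
                return [(h, h)]
            pairs = []
            for bits in product((0, 1), repeat=len(hets) - 1):
                bits = (1,) + bits                  # fix first het site: unordered pairs
                a, b = list(base), list(base)
                for site, bit in zip(hets, bits):
                    a[site], b[site] = bit, 1 - bit
                pairs.append((tuple(a), tuple(b)))
            return pairs

        def em_haplotype_freqs(genotypes, n_iter=200):
            pair_lists = [compatible_pairs(g) for g in genotypes]
            haps = sorted({h for pl in pair_lists for pair in pl for h in pair})
            f = {h: 1.0 / len(haps) for h in haps}
            for _ in range(n_iter):
                exp = defaultdict(float)
                for pl in pair_lists:
                    w = np.array([(1 if a == b else 2) * f[a] * f[b] for a, b in pl])
                    w /= w.sum()                    # E-step: posterior over resolutions
                    for (a, b), wi in zip(pl, w):
                        exp[a] += wi
                        exp[b] += wi
                f = {h: exp[h] / (2 * len(genotypes)) for h in haps}   # M-step
            return f

        # e.g. em_haplotype_freqs([(0, 0), (1, 1), (2, 1), (1, 0)])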

  2. Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2014-11-01

    This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that UWB two-way time-of-flight ranges under indoor circumstances follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, to adjust the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each node from the samples in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a multipath-dense environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
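
    A minimal sketch of the contrast being tested, assuming the range-error density is estimated from static calibration samples (here via a kernel density estimate; the anchors, residuals, and true position are illustrative). Least squares is recovered when the error density is a single zero-mean Gaussian:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import gaussian_kde

        def ml_position(anchors, ranges, err_pdf, x0):
            """Locate a node by maximizing the likelihood of range residuals."""
            def nll(p):
                d = np.linalg.norm(anchors - p, axis=1)     # predicted ranges
                return -np.sum(np.log(err_pdf(ranges - d) + 1e-300))
            return minimize(nll, x0, method='Nelder-Mead').x

        rng = np.random.default_rng(2)
        err_pdf = gaussian_kde(rng.normal(0.0, 0.1, 1000))  # static-mode residuals
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        truth = np.array([3.0, 4.0])
        ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0.0, 0.1, 4)
        p_hat = ml_position(anchors, ranges, err_pdf, x0=np.array([5.0, 5.0]))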

  3. The Multi-Mission Maximum Likelihood Framework (3ML)

    NASA Astrophysics Data System (ADS)

    Burgess, J. M.; Vianello, G.

    2016-10-01

    We introduce a new tool for multi-messenger astronomy capable of fitting data from multiple instruments properly via the use of independent likelihood plugins. 3ML represents a step forward in spectral and spatial analysis across all wavelengths.

  4. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum-likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations for numerically evaluating the maximum-likelihood estimate is discussed. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.
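
    A minimal sketch of the successive-approximations update for proportions with known component densities (the components and data are illustrative); step = 1 gives the EM fixed point, and the step-size range (0, 2) is the one analyzed above:

        import numpy as np
        from scipy.stats import norm

        def estimate_proportions(x, densities, n_iter=500, step=1.0):
            F = np.column_stack([f(x) for f in densities])  # n x K component densities
            p = np.full(F.shape[1], 1.0 / F.shape[1])
            for _ in range(n_iter):
                resp = F * p
                resp /= resp.sum(axis=1, keepdims=True)     # posteriors per sample
                p = p + step * (resp.mean(axis=0) - p)      # successive approximation
            return p

        rng = np.random.default_rng(3)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 700)])
        p_hat = estimate_proportions(x, [norm(0, 1).pdf, norm(3, 1).pdf])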

  5. Maximum-likelihood estimation of gene location by linkage disequilibrium

    SciTech Connect

    Hill, W.G.; Weir, B.S.

    1994-04-01

    Linkage disequilibrium, D, between a polymorphic disease and mapped markers can, in principle, be used to help find the map position of the disease gene. Likelihoods are therefore derived for the value of D conditional on the observed number of haplotypes in the sample and on the population parameter Nc, where N is the effective population size and c the recombination fraction between the disease and marker loci. The likelihood is computed explicitly for the case of two loci with heterozygote superiority and, more generally, by computer simulations assuming a steady state of constant population size and selective pressures or neutrality. It is found that the likelihood is, in general, not very dependent on the degree of selection at the loci and is very flat. This suggests that precise information on map position will not be obtained from estimates of linkage disequilibrium. 15 refs., 5 figs., 21 tabs.

  6. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.

  7. Maximum-likelihood estimation of admixture proportions from genetic data.

    PubMed Central

    Wang, Jinliang

    2003-01-01

    For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794

  8. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  9. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  10. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  11. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function: LL(x) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
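
    A minimal numeric sketch of the local term (the conditional densities below are illustrative Gaussians, not the patent's calibrated distributions):

        import numpy as np
        from scipy.stats import norm

        # Placeholder conditional densities p(rho|PROT), p(rho|SOLV), p(rho|H)
        f_prot, f_solv, f_h = norm(0.3, 0.3).pdf, norm(0.0, 0.1).pdf, norm(0.6, 0.2).pdf

        def local_log_likelihood(rho, p_prot, p_solv, p_h):
            """ln[p(rho|PROT)p_PROT(x) + p(rho|SOLV)p_SOLV(x) + p(rho|H)p_H(x)]"""
            mix = f_prot(rho) * p_prot + f_solv(rho) * p_solv + f_h(rho) * p_h
            return np.log(mix)

        # The map log-likelihood is the sum of these local terms over grid points x.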

  12. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    SciTech Connect

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.

  13. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  14. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  15. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  16. Maximum likelihood positioning and energy correction for scintillation detectors.

    PubMed

    Lerche, Christoph W; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-21

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses likelihood maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI-compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.
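
    A minimal sketch of the positioning step (not the published implementation; calibrated per-crystal light patterns and a Poisson model are assumed, and energy estimation is omitted). Robustness to broken pixels comes from simply excluding them from the likelihood sum:

        import numpy as np

        def ml_crystal(event, patterns, alive):
            """event: (n_sipm,) measured counts; patterns: (n_crystal, n_sipm)
            calibrated mean counts per crystal; alive: mask of working pixels."""
            lam = np.maximum(patterns[:, alive], 1e-12)
            k = event[alive]
            # Poisson log-likelihood per crystal, up to terms independent of the crystal
            ll = (k * np.log(lam) - lam).sum(axis=1)
            return int(np.argmax(ll))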

  17. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko O.; Shenton, L.R.

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with the problem of a valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

  18. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    PubMed Central

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  19. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
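
    A minimal sketch of the Fisher information computation for independent Poisson detector outputs with means μ_k(θ), where F_ij = Σ_k (1/μ_k) ∂μ_k/∂θ_i ∂μ_k/∂θ_j (the mean-response model is left abstract and the derivatives are taken numerically):

        import numpy as np

        def fisher_information(mean_fn, theta, eps=1e-5):
            """mean_fn maps parameters theta to the vector of Poisson means mu_k."""
            theta = np.asarray(theta, dtype=float)
            mu = mean_fn(theta)
            grads = []
            for i in range(len(theta)):
                t = theta.copy()
                t[i] += eps
                grads.append((mean_fn(t) - mu) / eps)   # numerical dmu/dtheta_i
            G = np.array(grads)                          # shape (p, n_k)
            return (G / mu) @ G.T                        # F; the CRLB is inv(F)

        # e.g. for a hypothetical two-parameter response model mu(theta):
        # F = fisher_information(lambda th: th[1] * response_shape(th[0]), theta0)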

  20. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses likelihood maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI-compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.

  1. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.
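
    The central ingredient of the free-kick target, simulating model error by randomly displacing atomic coordinates, can be sketched as follows; the displacement magnitude and the uniform-in-sphere scheme are assumptions chosen for illustration, not the authors' exact protocol.

    import numpy as np

    def kick_model(coords, max_shift=0.3, rng=None):
        # Return a "kicked" copy of an (N, 3) coordinate array: every atom is
        # displaced in a random direction by up to max_shift. The perturbed
        # model then stands in for a test set when estimating phase errors.
        rng = np.random.default_rng() if rng is None else rng
        directions = rng.normal(size=coords.shape)
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = max_shift * rng.random((len(coords), 1)) ** (1.0 / 3.0)
        return coords + radii * directions

    atoms = np.random.default_rng(1).random((100, 3)) * 20.0  # toy coordinates
    kicked = kick_model(atoms, max_shift=0.3)
    print(np.mean(np.linalg.norm(kicked - atoms, axis=1)))    # mean displacement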

  2. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics, and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
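
    To make the length-bias idea concrete, here is a minimal toy sketch (a parametric special case, not the paper's nonparametric EM machinery): under length-biased sampling from an exponential population with rate lambda, the observed density is g(t) = t f(t) / E[T] = lambda^2 t exp(-lambda t), and the MLE has the closed form 2 / mean(t), which the numerical optimizer reproduces.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Length-biased draws from an exponential(rate) population: the observed
    # density g(t) = t f(t) / E[T] is Gamma(shape=2, scale=1/rate).
    rng = np.random.default_rng(42)
    true_rate = 0.5
    t = rng.gamma(shape=2.0, scale=1.0 / true_rate, size=2000)

    def neg_loglike(rate):
        # -log L under g(t) = rate^2 * t * exp(-rate * t)
        return -np.sum(2.0 * np.log(rate) + np.log(t) - rate * t)

    fit = minimize_scalar(neg_loglike, bounds=(1e-6, 10.0), method="bounded")
    print(fit.x, 2.0 / t.mean())   # numerical MLE matches the closed form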

  3. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.
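
    A minimal sketch of the REML-MVN idea, with an invented two-trait G and an invented diagonal sampling covariance standing in for the inverse-information matrix a REML program such as WOMBAT would supply: draw vech(G) from the asymptotic multivariate normal and read off the spread of any downstream statistic, here the evolvability along a chosen direction.

    import numpy as np

    rng = np.random.default_rng(7)

    # Invented REML output for two traits: estimate of vech(G) = (G11, G12, G22)
    # and its sampling covariance (in practice, from the inverse of the
    # REML information matrix).
    vech_hat = np.array([1.0, 0.3, 0.5])
    vech_cov = np.diag([0.02, 0.01, 0.008])

    beta = np.array([1.0, 0.0])            # direction in trait space

    # REML-MVN: sample G matrices from the asymptotic normal and propagate
    # each draw through the statistic of interest (evolvability along beta).
    draws = rng.multivariate_normal(vech_hat, vech_cov, size=10_000)
    evol = [beta @ np.array([[g11, g12], [g12, g22]]) @ beta
            for g11, g12, g22 in draws]
    print(np.std(evol))                    # sampling error of the statistic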

  4. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.
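
    As a hedged sketch of the decision rule (the hypothesis set, predicted residuals, and noise level below are all invented, not the X-38 dynamics), maximum-likelihood FDI can be posed as picking the fault hypothesis whose predicted gyro residual best explains the measurement; with equal-variance Gaussian noise this reduces to a nearest-prediction test.

    import numpy as np

    rng = np.random.default_rng(3)

    # Invented hypothesis set: each fault predicts a mean gyro residual (rad/s).
    hypotheses = {
        "no fault":       np.array([0.00, 0.00, 0.00]),
        "jet 1 off-fail": np.array([-0.02, 0.00, 0.01]),
        "jet 2 off-fail": np.array([0.00, -0.03, 0.00]),
        "jet 3 on-fail":  np.array([0.01, 0.01, -0.02]),
    }
    noise_std = 0.005

    measured = hypotheses["jet 2 off-fail"] + rng.normal(scale=noise_std, size=3)

    # ML decision: with equal-variance Gaussian noise, the most likely
    # hypothesis is the one whose prediction is closest in Euclidean distance.
    best = min(hypotheses, key=lambda h: np.sum((measured - hypotheses[h]) ** 2))
    print(best)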

  5. A maximum likelihood estimation framework for delay logistic differential equation model

    NASA Astrophysics Data System (ADS)

    Mahmoud, Ahmed Adly; Dass, Sarat Chandra; Muthuvalu, Mohana S.

    2016-11-01

    This paper introduces the maximum likelihood method of estimation for a delay differential equation model governed by an unknown delay and other parameters of interest, followed by a numerical solver approach. As an example we consider the delayed logistic differential equation. A grid-based estimation framework is proposed. Our methodology correctly estimates the delay parameter as well as the initial starting value of the dynamical system based on simulation data. The computations have been carried out with the help of the mathematical software MATLAB® 8.0 (R2012b).
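
    A minimal sketch of such a grid-based estimator, assuming i.i.d. Gaussian observation noise (so the MLE of the delay minimizes the sum of squared errors); the growth rate, carrying capacity, step size, and sampling scheme are invented for the example.

    import numpy as np

    def simulate(tau, r=0.8, K=1.0, n0=0.1, dt=0.01, t_end=30.0):
        # Euler solution of the delayed logistic equation
        # dN/dt = r * N(t) * (1 - N(t - tau) / K), history N(t <= 0) = n0.
        n_steps = int(t_end / dt)
        lag = int(round(tau / dt))
        N = np.full(n_steps + 1, n0)
        for i in range(n_steps):
            delayed = N[i - lag] if i >= lag else n0
            N[i + 1] = N[i] + dt * r * N[i] * (1.0 - delayed / K)
        return N

    rng = np.random.default_rng(0)
    true_tau = 1.5
    obs = simulate(true_tau)[::100] + rng.normal(scale=0.02, size=31)

    # Grid-based ML: under i.i.d. Gaussian noise, the MLE of the delay is
    # the grid value minimizing the sum of squared errors to the data.
    taus = np.linspace(0.5, 3.0, 251)
    sse = [np.sum((simulate(tau)[::100] - obs) ** 2) for tau in taus]
    print(taus[int(np.argmin(sse))])       # close to true_tau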

  6. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  7. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of a precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the punctured RA code.

  8. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  9. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  10. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  11. On the Relationships between Sum Score Based Estimation and Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    del Pino, Guido; San Martin, Ernesto; Gonzalez, Jorge; De Boeck, Paul

    2008-01-01

    This paper analyzes the sum score based (SSB) formulation of the Rasch model, where items and sum scores of persons are considered as factors in a logit model. After reviewing the evolution leading to the equality between their maximum likelihood estimates, the SSB model is then discussed from the point of view of pseudo-likelihood and of…

  12. Marginal Maximum Likelihood Estimation for the One-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Thissen, David

    1982-01-01

    Two algorithms for marginal maximum likelihood estimation for the Rasch model are provided. The more efficient of the two algorithms is extended to estimation for the linear logistic model. Numerical examples of both procedures are presented. (Author/JKS)

  13. A Primer on Maximum Likelihood Algorithms Available for Use with Missing Data. Teacher's Corner.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Provides a comprehensive, nontechnical overview of the three maximum likelihood algorithms available for use with missing data and discusses multiple imputation, frequently used in conjunction with the EM algorithm. (SLD)

  14. Maximum Log-Likelihood Method with Weighting Penalty for CRISM Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    He, L.; Arvidson, R. E.; O'Sullivan, J. A.; Politte, D. V.

    2017-06-01

    We develop the Maximum Log-likelihood Method with weighting penalties to denoise and reconstruct the CRISM data. Spatial and spectral weights determined by sampling degrees and noise levels are introduced to remove spatial and spectral artifacts.

  15. Adaptive Trellis Using Interim Maximum-Likelihood Detector Output for a Holographic Storage System

    NASA Astrophysics Data System (ADS)

    Kim, Gukhui; Kim, Jinyoung; Lee, Jaejin

    2011-09-01

    The performance of partial response maximum likelihood (PRML) detection for holographic data storage can be degraded by asymmetric channel characteristics such as radial/tangential tilts. We therefore propose an adaptive trellis scheme to improve performance. The proposed algorithm updates the reference branch values using the interim maximum-likelihood detector output, tracking the changed channel condition. This system thus achieves a better bit error rate than conventional PRML detection.

  16. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990 (C. V. Tran). The report compares the Cramer-Rao bound with the MUSIC and maximum likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, examining the effects of the temporal phase difference between the sources. Numerical examples include two equipowered signals impinging on a 5-element uniform linear array for correlation magnitudes |p| = 0.50 and |p| = 1.00 at SNR = 20 dB.

  17. Using maximum likelihood to estimate population size from temporal changes in allele frequencies.

    PubMed Central

    Williamson, E G; Slatkin, M

    1999-01-01

    We develop a maximum-likelihood framework for using temporal changes in allele frequencies to estimate the number of breeding individuals in a population. We use simulations to compare the performance of this estimator to an F-statistic estimator of variance effective population size. The maximum-likelihood estimator had a lower variance and smaller bias. Taking advantage of the likelihood framework, we extend the model to include exponential growth and show that temporal allele frequency data from three or more sampling events can be used to test for population growth. PMID:10353915

  18. SAR automatic target recognition using maximum likelihood template-based classifiers

    NASA Astrophysics Data System (ADS)

    Saghri, John A.

    2008-08-01

    A review of several recently developed maximum likelihood template-based automatic target recognition (ATR) algorithms for extended targets in synthetic aperture radar (SAR) imagery data is presented. The algorithms are based on 'gradient' peaks, 'ceiling' peaks, edges, corners, shadows, and rectangular fits. A weight-based Bayesian maximum likelihood scheme to combine multiple template-based classifiers is presented. The feature weights are derived from prior recognition accuracies, i.e., confidence levels, achieved by individual template-based classifiers. Application of feature-based weights instead of target-specific feature-based weights reduces the resulting ATR accuracy by only a small amount. Preliminary results indicate that (1) the ceiling peaks provide the most target-discriminating power, and (2) inclusion of more target-discriminating features leads to higher classification accuracy. The Dempster-Shafer rule of combination is suggested as a potential alternative to the implemented Bayesian decision theory approach to resolve conflicting reports from multiple template-based classifiers.

  19. Carrier Recovery Enhancement for Maximum-Likelihood Doppler Shift Estimation in Mars Exploration Missions

    NASA Astrophysics Data System (ADS)

    Cattivelli, Federico S.; Estabrook, Polly; Satorius, Edgar H.; Sayed, Ali H.

    2008-11-01

    One of the most crucial stages of the Mars exploration missions is the entry, descent, and landing (EDL) phase. During EDL, maintaining reliable communication from the spacecraft to Earth is extremely important for the success of future missions, especially in case of mission failure. EDL is characterized by very high accelerations, caused by atmospheric friction, parachute deployment, and rocket firing, among others. These dynamics cause a severe Doppler shift on the carrier communications link to Earth. Methods have been proposed to estimate the Doppler shift based on maximum likelihood. So far these methods have proved successful, but it is expected that the next Mars mission, known as the Mars Science Laboratory, will suffer from higher dynamics and lower SNR. Thus, improving the existing estimation methods becomes a necessity. We propose a maximum likelihood approach that takes into account the power in the data tones to enhance carrier recovery, and improve the estimation performance by up to 3 dB. Simulations are performed using real data obtained during the EDL stage of the Mars Exploration Rover B (MERB) mission.
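
    For intuition, here is a toy version without the data-tone weighting the paper adds: for a single complex tone in white Gaussian noise, the maximum-likelihood frequency estimate is the peak of the periodogram, so a zero-padded FFT gives a fine-grained ML search. The sample rate, tone frequency, and SNR below are invented.

    import numpy as np

    rng = np.random.default_rng(5)
    fs, n = 1000.0, 1024                  # sample rate (Hz) and record length
    t = np.arange(n) / fs
    f_true = 123.4                        # "Doppler" offset to recover (Hz)
    x = np.exp(2j * np.pi * f_true * t) + 0.5 * (
        rng.normal(size=n) + 1j * rng.normal(size=n))

    # For a single complex tone in white Gaussian noise, the ML frequency
    # estimate maximizes the periodogram; zero-padding refines the grid.
    nfft = 16 * n
    spec = np.abs(np.fft.fft(x, nfft)) ** 2
    freqs = np.fft.fftfreq(nfft, d=1.0 / fs)
    print(freqs[np.argmax(spec)])         # close to f_true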

  20. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  1. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences by ever more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229

  2. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    PubMed

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

  3. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates

    PubMed Central

    Chen, Ming-Hui; Ibrahim, Joseph G.; Shao, Qi-Man

    2009-01-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology. PMID:19802375

  4. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
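
    A minimal sketch of one such iterative procedure, the EM algorithm for a two-component univariate normal mixture; the data, starting values, and iteration count are invented, and the paper's own procedure and its scoring variants are not reproduced.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

    # EM for a two-component univariate normal mixture.
    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of each component for each point.
        dens = w * norm.pdf(x[:, None], mu, sd)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates of the parameters.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    print(w, mu, sd)

    Each iteration cannot decrease the log-likelihood, so the updates converge to a (possibly local) maximum; in practice one monitors the log-likelihood rather than running a fixed number of iterations.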

  6. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  7. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  10. Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method

    NASA Astrophysics Data System (ADS)

    Bayrak, Özlem Türker; Akkaya, Aysen D.

    2010-02-01

    We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of the sample observations and therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous compared with the least squares estimators. We give a real-life example.

  11. Change point models for cognitive tests using semi-parametric maximum likelihood

    PubMed Central

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E.

    2013-01-01

    Random-effects change point models are formulated for longitudinal data obtained from cognitive tests. The conditional distribution of the response variable in a change point model is often assumed to be normal even if the response variable is discrete and shows ceiling effects. For the sum score of a cognitive test, the binomial and the beta-binomial distributions are presented as alternatives to the normal distribution. Smooth shapes for the change point models are imposed. Estimation is by marginal maximum likelihood where a parametric population distribution for the random change point is combined with a non-parametric mixing distribution for other random effects. An extension to latent class modelling is possible in case some individuals do not experience a change in cognitive ability. The approach is illustrated using data from a longitudinal study of Swedish octogenarians and nonagenarians that began in 1991. Change point models are applied to investigate cognitive change in the years before death. PMID:23471297

  12. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
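
    A minimal sketch of the supervised maximum likelihood classification step, assuming a Gaussian class-conditional model per land-cover class; the band values, class names, and training sets are invented, and this is not the TE-18 production code.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)

    # Invented training pixels: three "bands" per land-cover class.
    train = {
        "conifer":   rng.normal([30.0, 20.0, 15.0], 3.0, size=(100, 3)),
        "deciduous": rng.normal([45.0, 40.0, 25.0], 4.0, size=(100, 3)),
        "water":     rng.normal([10.0, 8.0, 5.0], 2.0, size=(100, 3)),
    }

    # Supervised step: fit a multivariate normal to each class's training data.
    models = {c: multivariate_normal(s.mean(axis=0), np.cov(s.T))
              for c, s in train.items()}

    # Classification step: label each pixel with its maximum-likelihood class.
    pixels = rng.normal([44.0, 39.0, 26.0], 4.0, size=(5, 3))
    labels = [max(models, key=lambda c: models[c].logpdf(p)) for p in pixels]
    print(labels)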

  13. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  14. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  15. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  16. Multi-energy penalized maximum-likelihood reconstruction for x-ray security imaging

    NASA Astrophysics Data System (ADS)

    Politte, David G.; Lu, Jingwei; O'Sullivan, Joseph A.; Johnson, Eric; Bosch, Carl

    2017-05-01

    X-ray imaging for security screening is a challenging application that requires simultaneous satisfaction of seemingly incompatible constraints, including low cost, high throughput, and reliable detection of threats. We take a principled computational imaging approach to system design. Mathematical models of the underlying physics and a Huber-class penalty function yield a penalized maximum-likelihood problem. We extend our iterative algorithm for computing linear attenuation coefficients to use multiple energy bins in the SureScan x1000, which has an unconventional, fixed-source geometry. The goal is to maintain the spatial resolution of the single-energy reconstruction while providing information for material characterization used for detection of threats.

  17. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  18. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models]

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  19. Recent developments in maximum likelihood estimation of MTMM models for categorical data.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability for MTMM models with categorical data is discussed.

  20. A likelihood approach to calculating risk support intervals

    SciTech Connect

    Leal, S.M.; Ott, J.

    1994-05-01

    Genetic risks are usually computed under the assumption that genetic parameters, such as the recombination fraction, are known without error. Uncertainty in the estimates of these parameters must translate into uncertainty regarding the risk. To allow for uncertainties in parameter values, one may employ Bayesian techniques or, in a maximum-likelihood framework, construct a support interval (SI) for the risk. Here the authors have implemented the latter approach. The SI for the risk is based on the SIs of parameters involved in the pedigree likelihood. As an empirical example, the SI for the risk was calculated for probands who are members of chronic spinal muscular atrophy kindreds. In order to evaluate the accuracy of a risk in genetic counseling situations, the authors advocate that, in addition to a point estimate, an SI for the risk should be calculated. 16 refs., 1 fig., 1 tab.
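
    A minimal sketch of the support-interval construction on a toy problem (a binomial risk with invented counts, rather than a pedigree likelihood): an m-unit support interval keeps every parameter value whose log10-likelihood lies within m units of the maximum.

    import numpy as np
    from scipy.stats import binom

    # Toy data: 7 affected among 20 offspring; the parameter is the risk p.
    k, n = 7, 20
    p_grid = np.linspace(1e-3, 1.0 - 1e-3, 999)
    log10_lik = np.log10(binom.pmf(k, n, p_grid))

    # An m-unit support interval keeps every p whose log10-likelihood is
    # within m units of the maximum (m = 1 is a common choice).
    m = 1.0
    supported = p_grid[log10_lik >= log10_lik.max() - m]
    print(supported.min(), supported.max())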

  1. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  2. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data]

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  3. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for the estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  4. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    SciTech Connect

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
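
    A minimal sketch of the map-making normal equations such a code solves, (A^T N^-1 A) m = A^T N^-1 d, using scipy's conjugate gradient on an invented toy scan with white noise; real CMB analyses handle correlated noise with FFT-based N^-1 applications, which this sketch omits, and none of this is MADmap's implementation.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(4)
    n_samp, n_pix = 5000, 50

    # Pointing matrix A: each time sample observes exactly one sky pixel.
    hit = rng.integers(0, n_pix, n_samp)
    A = csr_matrix((np.ones(n_samp), (np.arange(n_samp), hit)),
                   shape=(n_samp, n_pix))

    sky = rng.normal(size=n_pix)
    noise_var = 0.25                       # white noise for simplicity
    d = A @ sky + rng.normal(scale=noise_var ** 0.5, size=n_samp)

    # Maximum-likelihood map: solve (A^T N^-1 A) m = A^T N^-1 d by CG.
    Ninv = 1.0 / noise_var
    lhs = LinearOperator((n_pix, n_pix), matvec=lambda m: A.T @ (Ninv * (A @ m)))
    m_hat, info = cg(lhs, A.T @ (Ninv * d))
    print(info, np.abs(m_hat - sky).max())  # info == 0 means CG converged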

  5. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures.

    PubMed

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  6. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    PubMed Central

    Pražnikar, Jure; Turk, Dušan

    2014-01-01

    The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely. PMID:25478831

  7. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
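
    To make the objective concrete, here is a brute-force sketch of maximum-likelihood soft-decision decoding for the small (7,4) Hamming code over an AWGN channel with BPSK signalling (the noise level is invented); A* optimizes this same codeword likelihood without enumerating all 2^k codewords as done here.

    import numpy as np
    from itertools import product

    # Generator matrix of the (7,4) Hamming code in systematic form.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    codewords = np.array([(np.array(m) @ G) % 2
                          for m in product([0, 1], repeat=4)])
    signals = 1 - 2 * codewords            # BPSK map: bit 0 -> +1, bit 1 -> -1

    rng = np.random.default_rng(6)
    sent = signals[rng.integers(len(signals))]
    r = sent + rng.normal(scale=0.8, size=7)   # received soft values

    # ML soft-decision decoding over AWGN: maximize the correlation <r, s(c)>,
    # equivalently minimize the Euclidean distance to the received vector.
    decoded = signals[np.argmax(signals @ r)]
    print(bool((decoded == sent).all()))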

  8. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena]

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and for behavior models is discussed. It is found to be useful in day-to-day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive-categories model and for signal detectability models.

  9. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
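
    A minimal sketch of three-parameter Weibull maximum-likelihood fitting using scipy's generic fitter on synthetic data; this stands in for, and is not, the FITTER routine cited in the abstract, and it applies no bias correction.

    import numpy as np
    from scipy.stats import weibull_min

    # Synthetic "diameters" from a three-parameter Weibull distribution
    # (shape c, location = threshold, scale).
    data = weibull_min.rvs(c=2.2, loc=5.0, scale=10.0, size=500,
                           random_state=np.random.default_rng(9))

    # Generic maximum-likelihood fit of all three parameters.
    c_hat, loc_hat, scale_hat = weibull_min.fit(data)
    print(c_hat, loc_hat, scale_hat)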

  10. Recovery of Weak Common Factors by Maximum Likelihood and Ordinary Least Squares Estimation.

    ERIC Educational Resources Information Center

    Briggs, Nancy E.; MacCallum, Robert C.

    2003-01-01

    Examined the relative performance of two commonly used methods of parameter estimation in factor analysis, maximum likelihood (ML) and ordinary least squares (OLS) through simulation. In situations with a moderate amount of error, ML often failed to recover the weak factor while OLS succeeded. Also presented an example using empirical data. (SLD)

  11. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  12. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  13. The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

    ERIC Educational Resources Information Center

    Enders, Craig K.; Bandalos, Deborah L.

    2001-01-01

    Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1)full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)

  14. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to three ad hoc techniques, with less bias and less…
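
    A minimal sketch of the FIML idea on an invented bivariate-normal example with values missing at random on one variable: each case contributes the Gaussian log-likelihood of just its observed block, and the full parameter vector is optimized directly, with no imputation or deletion.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(10)
    full = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=300)
    data = full.copy()
    data[rng.random(300) < 0.3, 1] = np.nan   # variable 2 missing at random

    comp = data[~np.isnan(data[:, 1])]        # fully observed cases
    only1 = data[np.isnan(data[:, 1]), 0]     # cases observing variable 1 only

    def neg_loglike(theta):
        # theta = (mean1, mean2, log sd1, log sd2, correlation)
        m1, m2, ls1, ls2, rho = theta
        s1, s2 = np.exp(ls1), np.exp(ls2)
        cov = [[s1 ** 2, rho * s1 * s2], [rho * s1 * s2, s2 ** 2]]
        # FIML: every case contributes the density of just its observed block.
        ll = multivariate_normal.logpdf(comp, [m1, m2], cov).sum()
        ll += norm.logpdf(only1, m1, s1).sum()
        return -ll

    res = minimize(neg_loglike, x0=[0.0, 0.0, 0.0, 0.0, 0.0],
                   bounds=[(None, None)] * 4 + [(-0.99, 0.99)])
    print(res.x[:2], np.exp(res.x[2:4]), res.x[4])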

  15. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  16. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  17. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  18. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  19. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  20. A Joint Maximum Likelihood Estimation Procedure for the Hyperbolic Cosine Model for Single-Stimulus Responses.

    ERIC Educational Resources Information Center

    Luo, Guanzhong

    2000-01-01

    Extends joint maximum likelihood estimation for the hyperbolic cosine model to the situation in which the units of items are allowed to vary. Describes the four estimation cycles designed to address four important issues of model development and presents results from two sets of simulation studies that show reasonably accurate parameter recovery…

  1. A New Maximum Likelihood Estimator for the Population Squared Multiple Correlation.

    ERIC Educational Resources Information Center

    Alf, Edward F., Jr.; Graf, Richard G.

    2002-01-01

    Developed a new estimator for the population squared multiple correlation using maximum likelihood estimation. Data from 72 air control school graduates demonstrate that the new estimator has greater accuracy than other estimators with values that fall within the parameter space. (SLD)

  2. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  3. The Bias Function of the Maximum Likelihood Estimate of Ability for the Dichotomous Response Level.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1993-01-01

    F. Samejima's approximation for the bias function for the maximum likelihood estimate of the latent trait in the general case where item responses are discrete is explored. Observations are made about the behavior of this bias function for the dichotomous response level in general. Empirical examples are given. (SLD)

  4. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)

  5. A Maximum Likelihood Method for Latent Class Regression Involving a Censored Dependent Variable.

    ERIC Educational Resources Information Center

    Jedidi, Kamel; And Others

    1993-01-01

    A method is proposed to simultaneously estimate regression functions and subject membership in "k" latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The method is illustrated through a consumer psychology application. (SLD)

  6. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  7. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  8. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  10. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  12. The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

    ERIC Educational Resources Information Center

    Enders, Craig K.; Bandalos, Deborah L.

    2001-01-01

    Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1)full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)

  13. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  14. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  16. Comparison of Maximum Likelihood and Pearson Chi-Square Statistics for Assessing Latent Class Models.

    ERIC Educational Resources Information Center

    Holt, Judith A.; Macready, George B.

    When latent class parameters are estimated, maximum likelihood and Pearson chi-square statistics can be derived for assessing the fit of the model to the data. This study used simulated data to compare these two statistics, and is based on mixtures of latent binomial distributions, using data generated from five dichotomous manifest variables.…

  17. Some Contrasts between Maximum Likelihood Factor Analysis and Alpha Factor Analysis.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Derflinger, Gerhard

    1990-01-01

    The fundamental mathematical model of L. L. Thurstone's common factor analysis is reviewed, and basic covariance matrices of maximum likelihood factor analysis and alpha factor analysis are presented. The methods are compared in terms of computational and scaling contrasts. Weighting and the appropriate number of common factors are considered.…

  18. Suboptimal maximum likelihood detection of on-off keying for a wireless optical communication system.

    PubMed

    Li, Ruijie; Ding, Shengli; Dang, Anhong

    2017-05-01

    This paper investigates a detection scheme that requires no channel state information for wireless optical communication systems. For conventional on-off keying signals, it is assumed that the conditional probability density function P(r|0) is much larger than P(r|1) when r<0. Under this assumption, a suboptimal maximum likelihood scheme is obtained by utilizing the probability density function without channel information. Theoretical analysis shows that the performance of the proposed scheme is close to that of maximum likelihood symbol-by-symbol detection. Monte Carlo simulations show a loss of about 0.62 dB relative to maximum likelihood symbol-by-symbol detection for a gamma-gamma channel with a Rytov variance of 1 at a signal-to-noise ratio of 2 dB, but the efficient algorithm makes real-time implementation of maximum-likelihood-based detection feasible. In addition, an experiment was set up at 2 Gbps, and the experimental results match well with the theory and simulation.
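
    As a rough illustration of the trade-off quantified above, the Python sketch below (a toy of my own construction, not the paper's algorithm) compares exact maximum likelihood symbol-by-symbol detection of on-off keying, which requires the per-symbol channel gain, with a channel-blind fixed-threshold detector. A log-normal fading model stands in for the gamma-gamma channel, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                                     # number of OOK symbols
bits = rng.integers(0, 2, n)                    # 0 -> "off", 1 -> "on"
h = rng.lognormal(0.0, 0.3, n)                  # toy fading gains (not gamma-gamma)
sigma = 0.5                                     # AWGN standard deviation
r = h * bits + sigma * rng.normal(size=n)       # received samples

# Exact ML symbol-by-symbol detection with known gain h:
# decide "on" iff N(r; h, sigma^2) > N(r; 0, sigma^2), i.e., r > h/2.
ml_hat = (r > h / 2).astype(int)

# Channel-blind detector: fixed threshold at half the *average* on-level,
# using only the channel's long-run statistics, not the per-symbol gain.
blind_hat = (r > 0.5 * np.exp(0.3**2 / 2)).astype(int)

print("ML (known gain) BER:", np.mean(ml_hat != bits))
print("channel-blind   BER:", np.mean(blind_hat != bits))
```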

  19. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  1. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  2. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  4. A novel method based on maximum likelihood estimation for the construction of seismic fragility curves using numerical simulations

    NASA Astrophysics Data System (ADS)

    Dang, Cong-Thuat; Le, Thien-Phu; Ray, Pascal

    2017-10-01

    Seismic fragility curves presenting some probability of failure or of a damage state exceedance versus seismic intensity can be established by engineering judgment, empirical or numerical approaches. This paper focuses on the latter issue. In recent studies, three popular methods based on numerical simulations, comprising scaled seismic intensity, maximum likelihood estimation and probabilistic seismic demand/capacity models, have been studied and compared. The results obtained show that the maximum likelihood estimation (MLE) method is in general better than other ones. However, previous publications also indicated the dependence of the MLE method on the ground excitation input. The objective of this paper is thus to propose a novel method improving the existing MLE one. Improvements are based on probabilistic ground motion information, which is taken into account in the proposed procedure. The validity of this new approach is verified by analytical tests and numerical examples.

  5. Analyzing pathogen suppressiveness in bioassays with natural soils using integrative maximum likelihood methods in R.

    PubMed

    Rall, Björn C; Latz, Ellen

    2016-01-01

    The potential of soils to naturally suppress inherent plant pathogens is an important ecosystem function. Usually, pathogen infection assays are used for estimating the suppressive potential of soils. In natural soils, however, co-occurring pathogens might simultaneously infect plants, complicating the estimation of a focal pathogen's infection rate (initial slope of the infection curve) as a measure of soil suppressiveness. Here, we present a method in R that corrects for these unwanted effects by developing a two-pathogen mono-molecular infection model. We fit the two-pathogen mono-molecular infection model to data by using an integrative approach combining a numerical simulation of the model with an iterative maximum likelihood fit. We show that in the presence of co-occurring pathogens, using uncorrected data leads to a critical under- or overestimation of soil suppressiveness measures. In contrast, our new approach enables precise estimation of soil suppressiveness measures such as plant infection rate and plant resistance time. Our method allows a correction of measured infection parameters that is necessary when different pathogens are present. Moreover, our model can be (1) adapted to use other models such as the logistic or the Gompertz model; and (2) extended by a facilitation parameter if infections in plants increase the susceptibility to new infections. We propose our method to be particularly useful for exploring soil suppressiveness of natural soils from different sites (e.g., in biodiversity experiments).
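
    The authors' implementation is in R; as a language-consistent stand-in for this listing, the Python sketch below fits a single-pathogen mono-molecular infection curve by maximum likelihood under an assumed Gaussian error model. The two-pathogen correction that is the paper's actual contribution is omitted, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def monomolecular(t, y_max, rate):
    """Mono-molecular infection curve: saturating growth toward y_max."""
    return y_max * (1.0 - np.exp(-rate * t))

# Synthetic bioassay: infection proportions observed over 12 days.
t = np.arange(1, 13, dtype=float)
y = monomolecular(t, y_max=0.9, rate=0.35) + rng.normal(0, 0.03, t.size)

def negloglik(params):
    y_max, rate, sigma = params
    if y_max <= 0 or rate <= 0 or sigma <= 0:
        return np.inf
    resid = y - monomolecular(t, y_max, rate)
    # Gaussian negative log-likelihood (constants dropped).
    return 0.5 * np.sum(resid**2) / sigma**2 + t.size * np.log(sigma)

fit = minimize(negloglik, x0=[0.5, 0.1, 0.1], method="Nelder-Mead")
y_max_hat, rate_hat, _ = fit.x
# The suppressiveness measure discussed above is the initial slope,
# which for this curve equals y_max * rate.
print(f"estimated infection rate (initial slope): {y_max_hat * rate_hat:.3f}")
```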

  6. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    PubMed Central

    Benevides, Leandro de Jesus; de Carvalho, Daniel Santana; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-01-01

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with those of a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27560837

  8. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases.

  9. Adapting Predictive Models for Cepheid Variable Star Classification Using Linear Regression and Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas

    2014-05-01

    We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtain a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in the current expansion rate of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained using a training dataset may be ineffectual on a testing set. A solution to such difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used in the testing data (target galaxy).
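
    A minimal sketch of the displacement idea, under strong simplifying assumptions: instead of a mixture of linear models, a kernel density fitted to a one-dimensional "anchor galaxy" feature serves as the training density, and maximum likelihood selects the shift that best aligns the target-galaxy sample with it. All names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Toy anchor-galaxy feature and a target-galaxy sample of the same
# feature displaced by an unknown offset.
train = np.concatenate([rng.normal(-1.0, 0.3, 400), rng.normal(1.0, 0.4, 600)])
true_shift = 0.7
test = np.concatenate([rng.normal(-1.0, 0.3, 200),
                       rng.normal(1.0, 0.4, 300)]) + true_shift

density = gaussian_kde(train)      # density model of the anchor data

def neg_loglik(shift):
    # Log-likelihood of the displaced target data under the anchor density.
    return -np.sum(np.log(density(test - shift) + 1e-300))

res = minimize_scalar(neg_loglik, bounds=(-3, 3), method="bounded")
print(f"estimated displacement: {res.x:.3f} (true {true_shift})")
```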

  10. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, the researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. The researchers introduce the use of asymptotic Bayesian approximations to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and the amount of processing required.

  11. Sparse Randomized Maximum Likelihood (SpRML) for subsurface flow model calibration and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammadreza M.; Jafarpour, Behnam

    2014-07-01

    Despite their apparent high dimensionality, spatially distributed hydraulic properties of geologic formations can often be compactly (sparsely) described in a properly designed basis. Hence, the estimation of high-dimensional subsurface flow properties from dynamic performance and monitoring data can be formulated and solved as a sparse reconstruction inverse problem. Recent advances in statistical signal processing, formalized under the compressed sensing paradigm, provide important guidelines on formulating and solving sparse inverse problems, primarily for linear models and using a deterministic framework. Given the uncertainty in describing subsurface physical properties, even after integration of the dynamic data, it is important to develop a practical sparse Bayesian inversion approach to enable uncertainty quantification. In this paper, we use sparse geologic dictionaries to compactly represent uncertain subsurface flow properties and develop a practical sparse Bayesian method for effective data integration and uncertainty quantification. The multi-Gaussian assumption that is widely used in classical probabilistic inverse theory is not appropriate for representing sparse prior models. Following the results presented by the compressed sensing paradigm, the Laplace (or double exponential) probability distribution is found to be more suitable for representing sparse parameters. However, combining Laplace priors with the frequently used Gaussian likelihood functions leads to neither a Laplace nor a Gaussian posterior distribution, which complicates the analytical characterization of the posterior. Here, we first express the form of the Maximum A-Posteriori (MAP) estimate for Laplace priors and then use the Monte-Carlo-based Randomized Maximum Likelihood (RML) method to generate approximate samples from the posterior distribution. The proposed Sparse RML (SpRML) approximate sampling approach can be used to assess the uncertainty in the calibrated model with a…

  12. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed…
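
    As a small illustration of why maximum likelihood estimators are preferred over least-squares regression for fracture attributes, the Python sketch below recovers a power-law exponent from synthetic fracture lengths both ways; the distribution, sample size, and cutoff are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic fracture lengths from a Pareto (power-law) distribution
# with density f(x) proportional to x**(-alpha) above x_min.
alpha_true, x_min = 2.5, 0.1
lengths = x_min * (1 - rng.random(500)) ** (-1 / (alpha_true - 1))

# Maximum likelihood estimator of the exponent (Hill/Clauset form).
alpha_mle = 1 + lengths.size / np.sum(np.log(lengths / x_min))

# Least-squares slope of the log-log empirical survival function, for contrast.
x = np.sort(lengths)
survival = 1.0 - np.arange(x.size) / x.size      # P(X >= x), never zero here
slope, _ = np.polyfit(np.log(x), np.log(survival), 1)
alpha_ls = 1 - slope

print(f"true alpha {alpha_true}, MLE {alpha_mle:.2f}, least-squares {alpha_ls:.2f}")
```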

  13. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
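
    The comparison lends itself to a compact sketch: for Gaussian observations, both the negative log-likelihood and the closed-form Gaussian CRPS (as given by Gneiting and Raftery) can be minimized numerically, and under a correct distributional assumption the two estimates should nearly coincide. The data here are synthetic, and the simple location-scale model is my own stand-in for a full regression.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.5, 1000)        # observations assumed Gaussian

def neg_loglik(params):
    mu, log_sigma = params
    return -np.sum(norm.logpdf(y, mu, np.exp(log_sigma)))

def mean_crps(params):
    # Closed-form CRPS of a Gaussian forecast N(mu, sigma^2).
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))
    return np.mean(crps)

for name, objective in [("maximum likelihood", neg_loglik),
                        ("minimum CRPS", mean_crps)]:
    fit = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"{name:>18}: mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```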

  14. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    …data blocks could benefit from its ability to take different noise levels per data block into consideration and improve the recovery of the true patterns underlying the data. Moreover, the maximum likelihood based approach underlying MxLSCA-P could be extended to custom-made solutions to specific problems encountered. PMID:19835617

  15. MAGPI: A Framework for Maximum Likelihood MR Phase Imaging Using Multiple Receive Coils

    PubMed Central

    Dagher, Joseph; Nael, Kambiz

    2015-01-01

    Purpose: Combining MR phase images from multiple receive coils is a challenging problem, complicated by ambiguities introduced by phase wrapping, noise and the unknown phase-offset between the coils. Various techniques have been proposed to mitigate the effect of these ambiguities, but most of the existing methods require additional reference scans and/or use ad hoc post-processing techniques that do not guarantee any optimality. Theory and Methods: Here, the phase estimation problem is formulated rigorously using a Maximum-Likelihood (ML) approach. The proposed framework jointly designs the acquisition-processing chain: the optimized pulse sequence is a single Multi-Echo Gradient Echo scan and the corresponding post-processing algorithm is a voxel-per-voxel ML estimator of the underlying tissue phase. Results: Our proposed framework (MAGPI) achieves substantial improvements in the phase estimate, resulting in phase SNR gains of up to an order of magnitude compared to existing methods. Conclusion: The advantages of MAGPI are: (1) ML-optimal combination of phase data from multiple receive coils, without a reference scan; (2) ML-optimal estimation of the underlying tissue phase, without the need for spatial processing; and (3) robust dynamic estimation of channel-dependent phase-offsets. PMID:25946426

  16. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
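
    A minimal sketch of the two estimators on synthetic data, assuming (as the paper does for its MLE) a single tone in additive white Gaussian noise: the Kasai estimator takes the phase of the lag-one autocorrelation, while the ML estimator for this signal model is the periodogram maximizer. The sampling rate, SNR, and data length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 100_000.0                 # sampling (A-line) rate, Hz
f_true = 3_000.0               # Doppler frequency, Hz
n = 64                         # samples per estimate
t = np.arange(n) / fs
z = np.exp(2j * np.pi * f_true * t)
z = z + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # complex AWGN

# Kasai autocorrelation estimator: phase of the lag-1 autocorrelation.
f_kasai = np.angle(np.sum(z[1:] * np.conj(z[:-1]))) * fs / (2 * np.pi)

# ML estimator for a single tone in white Gaussian noise: the frequency
# maximizing the periodogram (evaluated on a fine zero-padded grid).
freqs = np.fft.fftfreq(16 * n, d=1 / fs)
spectrum = np.abs(np.fft.fft(z, 16 * n)) ** 2
f_mle = freqs[np.argmax(spectrum)]

print(f"true {f_true:.0f} Hz, Kasai {f_kasai:.0f} Hz, MLE {f_mle:.0f} Hz")
```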

  17. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  18. Comparative study of maximum likelihood and spectral angle mapper algorithms used for automated detection of melanoma.

    PubMed

    Ibraheem, I

    2015-02-01

    Melanoma is a leading fatal illness responsible for 80% of deaths from skin cancer. It originates in the pigment-producing melanocytes in the basal layer of the epidermis. Melanocytes produce melanin, the dark pigment responsible for the color of skin. Like all cancers, melanoma is caused by damage to the DNA of cells, which causes them to grow out of control, leading to a tumor that is far more dangerous if it is not detected early. Only a biopsy can establish an exact diagnosis, although it carries a risk of metastasis. When a melanoma is suspected, the usual standard procedure is to perform a biopsy and to subsequently analyze the suspicious tissue under the microscope. In this paper, we provide a new approach for early detection of melanoma using methods known as 'imaging spectroscopy' or 'spectral imaging' with two different supervised classifier algorithms: maximum likelihood (ML) and spectral angle mapper (SAM). SAM rests on spectral 'angular distance', whereas the conventional ML classifier rests on the spectral distance concept. The results show that the ML classifier was more efficient for pixel classification than SAM; however, SAM was more suitable for object classification.

  19. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program.
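
    The "full information" in FIML refers to evaluating each case's likelihood on whatever variables that case has observed. The sketch below illustrates that casewise likelihood for a toy bivariate normal with MCAR missingness; it is not the scale-score analysis of the article, and the parameterization is my own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)

# Bivariate data with values missing completely at random.
mu_true = np.array([0.0, 1.0])
cov_true = np.array([[1.0, 0.6], [0.6, 1.0]])
data = rng.multivariate_normal(mu_true, cov_true, 500)
data[rng.random(data.shape) < 0.2] = np.nan

def fiml_negloglik(theta):
    # Parameters: two means, two log-SDs, one correlation (via tanh).
    mu = theta[:2]
    sd = np.exp(theta[2:4])
    rho = np.tanh(theta[4])
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    ll = 0.0
    for row in data:
        obs = ~np.isnan(row)          # use whatever is observed in this case
        if not obs.any():
            continue
        ll += multivariate_normal.logpdf(row[obs], mu[obs],
                                         cov[np.ix_(obs, obs)])
    return -ll

fit = minimize(fiml_negloglik, np.zeros(5), method="Nelder-Mead")
print("FIML estimates of the means:", fit.x[:2])
```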

  20. Object recognition and localization from 3D point clouds by maximum-likelihood estimation.

    PubMed

    Dantanarayana, Harshana G; Huntley, Jonathan M

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  1. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.

  2. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  3. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods.

    PubMed

    Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof

    2007-07-01

    We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
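
    A minimal sketch of data cloning on a toy exponential model, assuming a flat prior and a hand-rolled Metropolis sampler rather than the standard MCMC software the authors mention: with K clones, the posterior mean approaches the MLE, and K times the posterior variance approaches the variance of the MLE.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=50)   # model: Exponential(rate)

def log_posterior(rate, clones):
    if rate <= 0:
        return -np.inf
    # Flat prior on the rate; cloning the data K times multiplies
    # the log-likelihood by K.
    loglik = data.size * np.log(rate) - rate * data.sum()
    return clones * loglik

def metropolis(clones, steps=20_000, step_sd=0.1):
    rate, trace = 1.0, []
    lp = log_posterior(rate, clones)
    for _ in range(steps):
        prop = rate + step_sd * rng.normal()
        lp_prop = log_posterior(prop, clones)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            rate, lp = prop, lp_prop
        trace.append(rate)
    return np.array(trace[5_000:])                # discard burn-in

k = 20
trace = metropolis(clones=k)
print("data-cloning estimate of rate:", trace.mean())
print("analytic MLE (1/sample mean) :", 1 / data.mean())
print("SE of MLE, sqrt(k*posterior var):", np.sqrt(k * trace.var()))
```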

  4. Maximum likelihood inference of the evolutionary history of a PPI network from the duplication history of its proteins.

    PubMed

    Li, Si; Choi, Kwok Pui; Wu, Taoyang; Zhang, Louxin

    2013-01-01

    Evolutionary history of protein-protein interaction (PPI) networks provides valuable insight into molecular mechanisms of network growth. In this paper, we study how to infer the evolutionary history of a PPI network from its protein duplication relationship. We show that for a plausible evolutionary history of a PPI network, its relative quality, measured by the so-called loss number, is independent of the growth parameters of the network and can be computed efficiently. This finding leads us to propose two fast maximum likelihood algorithms to infer the evolutionary history of a PPI network given the duplication history of its proteins. Simulation studies demonstrated that our approach, which takes advantage of protein duplication information, outperforms NetArch, the first maximum likelihood algorithm for PPI network history reconstruction. Using the proposed method, we studied the topological change of the PPI networks of the yeast, fruitfly, and worm.

  5. A comparison of feed-forward networks and maximum likelihood on a point-source location problem

    NASA Astrophysics Data System (ADS)

    Webb, Andrew R.

    1991-04-01

    The problem of point-source location using a multibeam focal plane staring array radar is considered. It is viewed as a problem in function approximation, in which the position of the source is regarded as a nonlinear function of the sampled radar image. An approximant is constructed, using a training set, which minimizes the mean square error in the position estimate. The problem of generalization is discussed. Two feed-forward network architectures are considered: a particular radial basis function network, which arises as a consequence of the minimum mean square error solution and is appropriate when the signal-to-noise ratio is 'small', and a multilayer perceptron, chosen for approximation at high signal-to-noise ratio. The errors in the position estimates for each of these approaches are compared with a maximum likelihood position estimation method. The maximum likelihood method gives better overall performance and has the advantage that it is not dependent on the signal-to-noise ratio.

  6. Missing data in longitudinal studies: cross-sectional multiple imputation provides similar estimates to full-information maximum likelihood.

    PubMed

    Ferro, Mark A

    2014-01-01

    The aim of this research was to examine, in an exploratory manner, whether cross-sectional multiple imputation generates valid parameter estimates for a latent growth curve model in a longitudinal data set with nonmonotone missingness. A simulated longitudinal data set of N = 5000 was generated and consisted of a continuous dependent variable, assessed at three measurement occasions and a categorical time-invariant independent variable. Missing data had a nonmonotone pattern and the proportion of missingness increased from the initial to the final measurement occasion (5%-20%). Three methods were considered to deal with missing data: listwise deletion, full-information maximum likelihood, and multiple imputation. A latent growth curve model was specified and analysis of variance was used to compare parameter estimates between the full data set and missing data approaches. Multiple imputation resulted in significantly lower slope variance compared with the full data set. There were no differences in any parameter estimates between the multiple imputation and full-information maximum likelihood approaches. This study suggested that in longitudinal studies with nonmonotone missingness, cross-sectional imputation at each time point may be viable and produces estimates comparable with those obtained with full-information maximum likelihood. Future research pursuing the validity of this method is warranted.

  7. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
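
    A compact sketch of the joint-likelihood idea under simplifying assumptions (one-dimensional sites, a known exponential correlation family, no white-noise term): the per-chip Gaussian log-likelihoods are summed and maximized over a single correlation length. The log-determinant is computed with slogdet rather than the paper's explicit LU-based routine, and all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)

# Measurement sites on a die and an exponential spatial correlation function.
x = np.linspace(0, 1, 20)
dist = np.abs(x[:, None] - x[None, :])
corr_len_true = 0.3
cov_true = np.exp(-dist / corr_len_true)

# Several test chips = independent realizations of the same random field.
chips = rng.multivariate_normal(np.zeros(x.size), cov_true, size=8)

def joint_negloglik(corr_len):
    cov = np.exp(-dist / corr_len) + 1e-9 * np.eye(x.size)
    _, logdet = np.linalg.slogdet(cov)
    inv = np.linalg.inv(cov)
    # Joint likelihood: sum of the per-chip Gaussian log-likelihoods.
    quad = np.einsum("ci,ij,cj->", chips, inv, chips)
    return 0.5 * (chips.shape[0] * logdet + quad)

res = minimize_scalar(joint_negloglik, bounds=(0.05, 2.0), method="bounded")
print(f"true correlation length {corr_len_true}, estimate {res.x:.3f}")
```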

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
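
    The report's expressions are specific to the NDMMF, but the computational scheme it describes, repeated Newton-Raphson iterations on the likelihood equations, can be shown generically. The sketch below applies the scheme to a toy logistic model as a stand-in; the data and model are assumptions of mine, not the NDMMF.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy logistic model standing in for the NDMMF's estimating equations.
X = np.column_stack([np.ones(300), rng.normal(size=300)])
beta_true = np.array([-0.5, 1.2])
y = rng.random(300) < 1 / (1 + np.exp(-X @ beta_true))

beta = np.zeros(2)
for it in range(25):                       # repeated Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (y - p)                  # gradient of the log-likelihood
    hessian = -(X * (p * (1 - p))[:, None]).T @ X
    step = np.linalg.solve(hessian, -score)
    beta += step                           # beta <- beta - H^{-1} * score
    if np.max(np.abs(step)) < 1e-10:       # convergence of the iteration
        break

print(f"converged in {it + 1} iterations; beta = {beta}")
```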

  9. Constrained Maximum Likelihood Estimation for Two-level Mean and Covariance Structure Models.

    PubMed

    Bentler, Peter M; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-03-22

    Maximum likelihood is commonly used for estimation of model parameters in analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in maximum likelihood analysis. Nonlinear constraints could be encountered in complicated applications. In this paper we develop an EM-type algorithm for estimating model parameters with both linear and nonlinear constraints. The empirical performance of the algorithm is demonstrated by a Monte Carlo study. Application of the algorithm for linear constraints is illustrated by setting up a two-level mean and covariance structure model for a real two-level data set and running an EQS program.

  10. A statistical technique for processing radio interferometer data. [using maximum likelihood algorithm

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. D.

    1975-01-01

    The output of a radio interferometer is the Fourier transform of the object under investigation. Due to the limited coverage of the Fourier plane, the reconstruction of the image of the source is blurred by the beam of the synthesized array. A maximum-likelihood processing technique is described which uses the statistical properties of the received noise-like signals. This technique has been used extensively in the processing of large-aperture seismic arrays. This inversion method results in a synthesized beam that is more uniform, has lower sidelobes, and higher resolution than normal Fourier transform methods. The maximum-likelihood algorithm was applied successfully to very long baseline and short baseline interferometric data.

  11. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173
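
    A rough numerical sketch of the setting described above, with assumed values throughout: two-way message exchanges whose one-way delays are inverse-Gaussian, and a clock offset recovered by maximizing the resulting likelihood numerically (the paper derives its estimator analytically).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import invgauss

rng = np.random.default_rng(10)

theta_true = 0.8                      # clock offset to estimate (seconds)
mu, lam = 2.0, 5.0                    # inverse-Gaussian delay mean and shape
n = 200                               # number of two-way exchanges

# scipy parameterization: invgauss(mu/lam, scale=lam) has mean mu, shape lam.
delay = lambda size: invgauss.rvs(mu / lam, scale=lam, size=size,
                                  random_state=rng)

t1 = rng.uniform(0, 1000, n)          # send times on node A's clock
t2 = t1 + theta_true + delay(n)       # forward trip, stamped on B's clock
t3 = t2 + 0.01                        # B replies after a fixed processing time
t4 = t3 - theta_true + delay(n)       # return trip, stamped on A's clock

def negloglik(theta):
    fwd = t2 - t1 - theta             # recovered forward delays
    bwd = t4 - t3 + theta             # recovered return delays
    if np.any(fwd <= 0) or np.any(bwd <= 0):
        return np.inf
    return -(invgauss.logpdf(fwd, mu / lam, scale=lam).sum()
             + invgauss.logpdf(bwd, mu / lam, scale=lam).sum())

res = minimize_scalar(negloglik, bounds=(-5, 5), method="bounded")
print(f"true offset {theta_true}, ML estimate {res.x:.3f}")
```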

  12. Comparison of minimum-norm maximum likelihood and maximum a posteriori wavefront reconstructions for large adaptive optics systems.

    PubMed

    Béchet, Clémentine; Tallon, Michel; Thiébaut, Eric

    2009-03-01

    The performances of various estimators for wavefront sensing applications such as adaptive optics (AO) are compared. Analytical expressions for the bias and variance terms in the mean squared error (MSE) are derived for the minimum-norm maximum likelihood (MNML) and the maximum a posteriori (MAP) reconstructors. The MAP estimator is analytically demonstrated to yield an optimal trade-off that reduces the MSE, hence leading to a better Strehl ratio. The implications for AO applications are quantified by simulations on 8-m- and 42-m-class telescopes. We show that the MAP estimator can achieve an MSE twice as low as that of MNML methods. Large AO systems can thus benefit from the high quality of MAP reconstruction in O(n) operations, thanks to the fast fractal iterative method (FrIM) algorithm (Thiébaut and Tallon, submitted to J. Opt. Soc. Am. A).

  13. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  14. Extraction of the Michel parameters from muon decay by a maximum likelihood method

    SciTech Connect

    McKee, R.J.

    1980-01-01

    A maximum likelihood method was developed for extracting the Michel parameters, which describe the decay spectrum of stopped polarized muons. The method was applied successfully in a Monte Carlo simulation of the apparatus. A complete error matrix for the set of Michel parameters was also obtained. In an experiment of 10^8 muon decays the statistical errors can be reduced by more than an order of magnitude over the currently accepted limits on the Michel parameters. 5 tables.

  15. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    NASA Astrophysics Data System (ADS)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk; Arlunno, Valeria; Caballero, Antonio; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-03-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) and against differential group delay (DGD) in an experiment involving 112 Gbit/s polarization-division multiplexed (PDM) 16-ary quadrature amplitude modulation (16 QAM) and quaternary phase-shift keying (QPSK).

  16. Maximum likelihood-based iterated divided difference filter for nonlinear systems from discrete noisy measurements.

    PubMed

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the low accuracy of nonlinear state estimation caused by large initial estimation errors and the nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating functional evaluations. The MLIDDF algorithm uses the iterated measurement update and the current measurement, and an iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence of estimates that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), divided difference filter (DDF), iterated unscented Kalman filter (IUKF) and iterated divided difference filter (IDDF), the latter two using a traditional iteration strategy. Simulation results demonstrate that the accumulated root-mean-square error in position for the MLIDDF algorithm is reduced by 63% compared to that of the UKF and DDF algorithms, and by 7% compared to that of the IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate.

  17. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) method for the stochastic state space model, via the expectation maximisation algorithm, that copes with observation outliers. Two types of outliers and their influence are studied: the additive outlier (AO) and the innovative outlier (IO). Because the MLE is sensitive to both AO and IO, we propose two techniques for robustifying it: weighted maximum likelihood estimation (WMLE) and trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it remains sensitive to IO and to patches of AO outliers. The TMLE, on the other hand, reduces to a combinatorial optimisation problem and is harder to implement, but it is robust to both types of outliers considered here. To overcome this difficulty, we apply a parallel randomised algorithm with low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  18. A calibration method of self-referencing interferometry based on maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao

    2015-05-01

    Self-referencing interferometry has been widely used in wavefront sensing. Currently, however, the wavefront measurement results contain two parts: the real phase information of the wavefront under test, and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate this system error. First, at least three phase difference distributions are obtained by measuring the tested component at three positions: one basic position, one rotation, and one lateral translation. Then, the three phase difference maps are combined into a maximum likelihood function, and the wavefront under test and the system error are reconstructed by least-squares estimation using Zernike polynomials. Simulation results show that the proposed method can calibrate a self-referencing interferometer, reduce the effect of system errors on extracting and reconstructing the wavefront under test, and improve the measurement accuracy of the instrument.

  19. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  20. Maximum likelihood principal component analysis of time-of-flight secondary ion mass spectrometry spectral images

    NASA Astrophysics Data System (ADS)

    Keenan, Michael R.

    2005-07-01

    Many modern surface analytical instruments are able to acquire huge amounts of data in the form of spectral images. Time-of-flight secondary ion mass spectrometry (TOF-SIMS), for instance, can easily generate a complete mass spectrum at each point in a two-dimensional or three-dimensional spatial array. The challenge for the data analyst, then, is to garner the analytically useful information from the overwhelming quantity of raw spectral data. Factor analysis techniques such as principal component analysis (PCA) have proven quite useful in this endeavor. Standard PCA, however, assumes that noise in the data is uniform, that is, that it does not depend on the magnitude of signal. This is clearly not correct for methods that rely on particle counting where the noise is governed by Poisson statistics. In this case, properly accounting for heteroscedasticity is essential to extracting the chemical information into a minimum number of factors while optimally excluding noise. Maximum likelihood PCA (MLPCA) is one approach to addressing this issue. MLPCA can, in principle, incorporate a separate uncertainty estimate for each individual observation in a data set. This article will present a MLPCA analysis of a simple and intuitive TOF-SIMS spectral image. The results show that there is a trade-off between the number of uncertainty parameters included in the model and the quality of each and, in fact, using poor estimates may be worse than ignoring the noise characteristics altogether. The best results were obtained by using a low-rank approximation to the noise rather than individual estimates. MLPCA will also be compared with an optimal scaling approach. For the particular example given, the added benefits of MLPCA do not outweigh the greatly increased computational demands of the technique.
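
    The noise-whitening idea behind MLPCA can be illustrated with a simple scaling scheme: because Poisson noise has variance equal to the mean, dividing the data by the square root of a low-rank estimate of the mean approximately equalizes the noise before an ordinary SVD. The sketch below (plain numpy, hypothetical synthetic counts) follows that spirit as a minimal approximation; it is not Keenan's full MLPCA algorithm.

```python
import numpy as np

def poisson_scaled_pca(counts, n_components):
    """Approximate maximum likelihood PCA for Poisson-noise count data.

    Scales the data so the count-dependent noise variance is roughly
    uniform, then applies an ordinary SVD. A simplified sketch, not the
    full MLPCA algorithm referenced in the abstract."""
    # Rank-one estimate of the mean surface; Poisson variance = mean.
    row = counts.mean(axis=1, keepdims=True)
    col = counts.mean(axis=0, keepdims=True)
    mean_est = row @ col / counts.mean()
    weights = 1.0 / np.sqrt(np.maximum(mean_est, 1e-12))

    u, s, vt = np.linalg.svd(weights * counts, full_matrices=False)
    low_rank = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return low_rank / weights   # undo the noise scaling

# Synthetic rank-one "spectral image" with Poisson counting noise.
rng = np.random.default_rng(0)
truth = np.outer(rng.uniform(1, 5, 200), rng.uniform(1, 5, 50))
data = rng.poisson(truth).astype(float)
recon = poisson_scaled_pca(data, n_components=1)
print(np.abs(recon - truth).mean())   # small mean reconstruction error
```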

  1. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  2. Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm

    NASA Astrophysics Data System (ADS)

    Kutz, Gideon; Raphaeli, Dan

    2010-12-01

    Maximum-likelihood semiblind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter in which each filter tap is a linear combination of basis functions. This channel description is then integrated into an iterative scheme based on the expectation-maximization (EM) principle that converges to an estimate of the channel description vector. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted at reducing the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them), even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.

  3. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  4. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503
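
    The Poisson log-likelihood maximization underlying the restoration algorithm above is, in its simplest single-frame, unregularized form, the classical Richardson-Lucy iteration. A minimal sketch follows, with a hypothetical Gaussian PSF; the paper's multi-frame, regularized, frame-selecting method is considerably richer.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    """Single-frame Richardson-Lucy iteration: the classical maximum
    likelihood image update under a Poisson noise model. A minimal
    sketch; the paper's multi-frame regularized algorithm is richer."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage with a hypothetical 5x5 Gaussian PSF and a point source.
x, y = np.mgrid[-2:3, -2:3]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
scene = np.zeros((64, 64))
scene[30, 30] = 500.0
observed = np.random.default_rng(1).poisson(
    fftconvolve(scene, psf, mode="same") + 1.0).astype(float)
restored = richardson_lucy(observed, psf)
```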

  5. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods as a function of the number of aptitude test items. The variance reflects the accuracy attained by both the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
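
    The iterative procedure analyzed in this abstract is the normal-mixture version of what is now called the EM algorithm. A minimal one-dimensional, two-component sketch (plain numpy/scipy on synthetic data) shows the successive-approximation structure:

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(x, n_iter=200):
    """EM iterations for a two-component 1-D normal mixture, the kind of
    successive-approximation procedure the abstract analyzes."""
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates of all parameters.
        w = r.mean()
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
print(em_normal_mixture(x))   # weights/means/stds near the true values
```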

  7. Bootstrap, Bayesian probability and maximum likelihood mapping: exploring new tools for comparative genome analyses

    PubMed Central

    Zhaxybayeva, Olga; Gogarten, J Peter

    2002-01-01

    Background Horizontal gene transfer (HGT) played an important role in shaping microbial genomes. In addition to genes under sporadic selection, HGT also affects housekeeping genes and those involved in information processing, even ribosomal RNA encoding genes. Here we describe tools that provide an assessment and graphic illustration of the mosaic nature of microbial genomes. Results We adapted the Maximum Likelihood (ML) mapping to the analyses of all detected quartets of orthologous genes found in four genomes. We have automated the assembly and analyses of these quartets of orthologs given the selection of four genomes. We compared the ML-mapping approach to more rigorous Bayesian probability and Bootstrap mapping techniques. The latter two approaches appear to be more conservative than the ML-mapping approach, but qualitatively all three approaches give equivalent results. All three tools were tested on mitochondrial genomes, which presumably were inherited as a single linkage group. Conclusions In some instances of interphylum relationships we find nearly equal numbers of quartets strongly supporting the three possible topologies. In contrast, our analyses of genome quartets containing the cyanobacterium Synechocystis sp. indicate that a large part of the cyanobacterial genome is related to that of low GC Gram positives. Other groups that had been suggested as sister groups to the cyanobacteria contain many fewer genes that group with the Synechocystis orthologs. Interdomain comparisons of genome quartets containing the archaeon Halobacterium sp. revealed that Halobacterium sp. shares more genes with Bacteria that live in the same environment than with Bacteria that are more closely related based on rRNA phylogeny. Many of these genes encode proteins involved in substrate transport and metabolism and in information storage and processing. The performed analyses demonstrate that relationships among prokaryotes cannot be accurately depicted by or inferred from

  8. A Survey of the Likelihood Approach to Bioequivalence Trials

    PubMed Central

    Choi, Leena; Caffo, Brian; Rohde, Charles

    2009-01-01

    Bioequivalence trials are abbreviated clinical trials whereby a generic drug or new formulation is evaluated to determine if it is "equivalent" to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area - which we believe are indicative of systemic defects in the frequentist approach - that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence. We discuss how the likelihood approach is useful to present the evidence for both average and population bioequivalence within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. PMID:18618422

  9. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
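
    The abstract's central claim, that non-singular affine transformations leave Gaussian maximum likelihood classification unchanged, can be checked numerically: the transformation maps the fitted means and covariances consistently, so every class log-likelihood shifts by the same constant log|det A| and the decision is preserved. A small numerical check below, with random data standing in for spectra (not AVIRIS data):

```python
import numpy as np

def gaussian_ml_classify(train_a, train_b, test):
    """Assign each test vector to the class whose fitted Gaussian gives
    it the higher log-likelihood (quadratic discriminant, equal priors)."""
    def loglik(x, data):
        mu = data.mean(axis=0)
        cov = np.cov(data, rowvar=False)
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))
    return np.array([loglik(x, train_a) > loglik(x, train_b) for x in test])

rng = np.random.default_rng(3)
a = rng.normal(0, 1, (100, 5))
b = rng.normal(1, 1.5, (100, 5))
test = rng.normal(0.5, 1.2, (50, 5))

# A random non-singular affine map: x -> Ax + c.
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
c = rng.normal(size=5)
f = lambda m: m @ A.T + c

same = gaussian_ml_classify(a, b, test) == gaussian_ml_classify(f(a), f(b), f(test))
print(same.all())   # expect True: labels unchanged by the affine transform
```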

  10. Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network

    NASA Technical Reports Server (NTRS)

    Pollara-Bozzola, Fabrizio (Inventor)

    1989-01-01

    A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.
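
    The key architectural point, exchanging only accumulated metrics while survivor pointers stay in local memory for a final traceback, can be seen in a plain single-processor Viterbi decoder. The sketch below is generic and does not reproduce the patented hypercube state reordering:

```python
import numpy as np

def viterbi_traceback(log_probs, trans_loglik, init_loglik):
    """Generic Viterbi decoding with survivor storage and traceback.

    log_probs:    (T, S) per-step observation log-likelihoods
    trans_loglik: (S, S) state transition log-likelihoods
    init_loglik:  (S,)   initial state log-likelihoods

    Only the accumulated metrics are carried forward from step to step;
    survivor pointers remain in local memory until the final traceback,
    which is the property the hypercube decoder exploits."""
    T, S = log_probs.shape
    metric = init_loglik + log_probs[0]
    survivors = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = metric[:, None] + trans_loglik        # (from-state, to-state)
        survivors[t] = cand.argmax(axis=0)           # best predecessor
        metric = cand.max(axis=0) + log_probs[t]
    # Traceback from the best final state.
    path = [int(metric.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(survivors[t, path[-1]]))
    return path[::-1]

# Tiny usage: 2 states, 4 observations (hypothetical log-likelihoods).
lp = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]))
tr = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
print(viterbi_traceback(lp, tr, np.log(np.array([0.5, 0.5]))))
```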

  11. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  12. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  13. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and therefore applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  14. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  15. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
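
    A standard way to guarantee positivity in maximum likelihood state reconstruction, and plausibly the one referred to here, is to parameterize the density matrix as ρ = T†T / tr(T†T) with T triangular, so every candidate is positive semidefinite with unit trace by construction. A single-qubit sketch with hypothetical measurement counts (not the authors' NMR data):

```python
import numpy as np
from scipy.optimize import minimize

def params_to_rho(p):
    """Map 4 real parameters to a valid 1-qubit density matrix via the
    Cholesky-style form rho = T^dag T / tr(T^dag T), positive
    semidefinite and unit-trace by construction."""
    T = np.array([[p[0], 0.0],
                  [p[2] + 1j * p[3], p[1]]])
    rho = T.conj().T @ T
    return rho / max(np.trace(rho).real, 1e-12)

# Hypothetical counts from measuring 1000 copies in the Z, X and Y bases.
bases = {
    "Z": [np.array([[1, 0], [0, 0]], complex),
          np.array([[0, 0], [0, 1]], complex)],
    "X": [0.5 * np.array([[1, 1], [1, 1]], complex),
          0.5 * np.array([[1, -1], [-1, 1]], complex)],
    "Y": [0.5 * np.array([[1, -1j], [1j, 1]], complex),
          0.5 * np.array([[1, 1j], [-1j, 1]], complex)],
}
counts = {"Z": [520, 480], "X": [700, 300], "Y": [430, 570]}

def neg_log_likelihood(p):
    rho = params_to_rho(p)
    nll = 0.0
    for b, projs in bases.items():
        for P, n in zip(projs, counts[b]):
            prob = np.trace(rho @ P).real
            nll -= n * np.log(max(prob, 1e-12))
    return nll

res = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
print(np.round(params_to_rho(res.x), 3))   # always a physical state
```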

  16. Maximum-likelihood estimation optimizer for constrained, time-optimal satellite reorientation

    NASA Astrophysics Data System (ADS)

    Melton, Robert G.

    2014-10-01

    The Covariance Matrix Adaptation-Evolutionary Strategy (CMA-ES) method provides a high-quality estimate of the control solution for an unconstrained satellite reorientation problem, and rapid, useful guesses needed for high-fidelity methods that can solve time-optimal reorientation problems with multiple path constraints. The CMA-ES algorithm offers two significant advantages over heuristic methods such as Particle Swarm or Bacteria Foraging Optimisation: it builds an approximation to the covariance matrix for the cost function, and uses that to determine a direction of maximum likelihood for the search, reducing the chance of stagnation; and it achieves second-order, quasi-Newton convergence behaviour.

  17. Phase Noise Investigation of Maximum Likelihood Estimation Method for Airborne Multibaseline SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Magnard, C.; Small, D.; Meier, E.

    2015-03-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.

  18. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  19. Maximum likelihood parametric blur identification based on a continuous spatial domain model.

    PubMed

    Pavlovic, G; Tekalp, A M

    1992-01-01

    A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates is proposed. Unlike previous ML blur identification methods based on discrete spatial domain blur models, this formulation makes it possible to find the ML estimate of the extent, as well as other parameters, of arbitrary point spread functions that admit a closed-form parametric description in the continuous coordinates. Experimental results are presented for the cases of 1-D uniform motion blur, 2-D out-of-focus blur, and 2-D truncated Gaussian blur at different signal-to-noise ratios.

  20. Multifrequency InSAR height reconstruction through maximum likelihood estimation of local planes parameters.

    PubMed

    Pascazio, Vito; Schirinzi, Gilda

    2002-01-01

    In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

  1. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD data collections that we have tested.

  2. Maximum likelihood estimation of the mixture of log-concave densities.

    PubMed

    Hu, Hao; Wu, Yichao; Yao, Weixin

    2016-09-01

    Finite mixture models are useful tools and can be estimated via the EM algorithm. A main drawback is the strong parametric assumption about the component densities. In this paper, a much more flexible mixture model is considered, which assumes each component density to be log-concave. Under fairly general conditions, the log-concave maximum likelihood estimator (LCMLE) exists and is consistent. Numerical examples demonstrate that the LCMLE improves the clustering results compared with the traditional MLE for parametric mixture models.

  3. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  4. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  5. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    NASA Astrophysics Data System (ADS)

    Chen, C. E.; Lorenzelli, F.; Hudson, R. E.; Yao, K.

    2007-12-01

    We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction-estimation Cramér-Rao bound (CRB) is derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise is studied. To mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show that the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and that both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  6. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692

  7. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  8. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  9. Maximum likelihood optimal and robust Support Vector Regression with lncosh loss function.

    PubMed

    Karal, Omer

    2017-10-01

    In this paper, a novel and continuously differentiable convex loss function based on the natural logarithm of the hyperbolic cosine function, namely the lncosh loss, is introduced to obtain Support Vector Regression (SVR) models which are optimal in the maximum likelihood sense for hyper-secant error distributions. Most current regression models assume that the distribution of error is Gaussian, which corresponds to the squared loss function and has helpful analytical properties such as easy computation and analysis. However, in many real world applications, observations are subject to unknown noise distributions, so the Gaussian distribution may not be a useful choice. The developed SVR model with the parameterized lncosh loss provides the possibility of learning a loss function that leads to a regression model which is maximum likelihood optimal for specific input-output data. The SVR models obtained with different parameter choices of the lncosh loss, with an ε-insensitiveness feature, possess most of the desirable characteristics of well-known loss functions, such as Vapnik's loss, the squared loss, and Huber's loss function, as special cases. In other words, extensive simulations show that the lncosh loss function is entirely controlled by a single adjustable parameter λ and, as a result, allows switching between different losses depending on the choice of λ. The effectiveness and feasibility of the lncosh loss function are validated through a number of synthetic and real world benchmark data sets for various types of additive noise distributions. Copyright © 2017 Elsevier Ltd. All rights reserved.
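
    The loss itself is compact enough to state directly. Under an assumed normalization (the paper's exact parameterization is not reproduced here), ln cosh(λr)/λ behaves like λr²/2 for small residuals and like |r| - log(2)/λ for large ones, so the single parameter λ interpolates between squared and absolute losses:

```python
import numpy as np

def lncosh_loss(residual, lam=1.0, eps=0.0):
    """ln(cosh) loss with an epsilon-insensitive zone (assumed form;
    the paper's exact parameterization may differ).

    Behaves like lam/2 * r^2 for small |r| and like |r| - log(2)/lam
    for large |r|, so one lam interpolates between squared and
    absolute losses."""
    r = np.maximum(np.abs(residual) - eps, 0.0)   # epsilon-insensitive zone
    return np.log(np.cosh(lam * r)) / lam

r = np.linspace(-3, 3, 7)
print(lncosh_loss(r, lam=0.5))   # nearly quadratic in r
print(lncosh_loss(r, lam=5.0))   # nearly |r|
```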

  10. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for nonstationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  11. Benefits of maximum likelihood estimators for fracture attribute analysis: Implications for permeability and up-scaling

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2017-02-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimate bulk permeability and therefore fluid flow, especially for rocks with low primary porosity where most of the flow takes place within fractures. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representation for both fracture lengths and apertures in the study area. By applying Maximum Likelihood Estimators to outcrop fracture data, we generate fracture network models with the same statistical attributes as those observed at the outcrop, from which we can achieve more robust predictions of bulk permeability.
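
    For a log-normal model, the maximum likelihood estimates have a closed form: the mean and standard deviation of the log-transformed data. A short sketch with synthetic fracture lengths (illustrative values, not the California dataset) that also compares log-likelihoods against a simpler alternative distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
lengths = rng.lognormal(mean=0.5, sigma=0.9, size=400)  # synthetic, metres

# Closed-form MLEs for a log-normal: mean and std (ddof=0) of log data.
log_l = np.log(lengths)
mu_hat, sigma_hat = log_l.mean(), log_l.std()

# Compare log-likelihoods against an exponential alternative.
ll_lognorm = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat)).logpdf(lengths).sum()
ll_expon = stats.expon(scale=lengths.mean()).logpdf(lengths).sum()
print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
print(f"log-likelihoods: lognormal {ll_lognorm:.1f} vs exponential {ll_expon:.1f}")
```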

  12. Maximum likelihood, multiple imputation and regression calibration for measurement error adjustment.

    PubMed

    Messer, Karen; Natarajan, Loki

    2008-12-30

    In epidemiologic studies of exposure-disease association, often only a surrogate measure of exposure is available for the majority of the sample. A validation sub-study may be conducted to estimate the relation between the surrogate measure and true exposure levels. In this article, we discuss three methods of estimation for such a main study/validation study design: (i) maximum likelihood (ML), (ii) multiple imputation (MI) and (iii) regression calibration (RC). For logistic regression, we show how each method depends on a different numerical approximation to the likelihood, and we adapt standard software to compute both MI and ML estimates. We use simulation to compare the performance of the estimators for both realistic and extreme settings, and for both internal and external validation designs. Our results indicate that with large measurement error or large enough sample sizes, ML performs as well as or better than MI and RC. However, for smaller measurement error and small sample sizes, either ML or RC may have the advantage. Interestingly, in most cases the relative advantage of RC versus ML was determined by the relative variance rather than the bias of the estimators. Software code for all three methods in SAS is provided.

  13. Neandertal Admixture in Eurasia Confirmed by Maximum-Likelihood Analysis of Three Genomes

    PubMed Central

    Lohse, Konrad; Frantz, Laurent A. F.

    2014-01-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4−7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination. PMID:24532731

  14. Maximum Likelihood Mapping of Quantitative Trait Loci Using Full-Sib Families

    PubMed Central

    Knott, S. A.; Haley, C. S.

    1992-01-01

    A maximum likelihood method is presented for the detection of quantitative trait loci (QTL) using flanking markers in full-sib families. This method incorporates a random component for common family effects due to additional QTL or the environment. Simulated data have been used to investigate this method. With a fixed total number of full sibs power of detection decreased substantially with decreasing family size. Increasing the number of alleles at the marker loci (i.e., polymorphism information content) and decreasing the interval size about the QTL increased power. Flanking markers were more powerful than single markers. In testing for a linked QTL the test must be made against a model which allows for between family variation (i.e., including an unlinked QTL or a between family variance component) or the test statistic may be grossly inflated. Mean parameter estimates were close to the simulated values in all situations when fitting the full model (including a linked QTL and common family effect). If the common family component was omitted the QTL effect was overestimated in data in which additional genetic variance was simulated and when compared with an unlinked QTL model there was reduced power. The test statistic curves, reflecting the likelihood of the QTL at each position along the chromosome, have discontinuities at the markers caused by adjacent pairs of markers providing different amounts of information. This must be accounted for when using flanking markers to search for a QTL in an outbred population. PMID:1459438

  15. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566

  16. Item exposure control for multidimensional computer adaptive testing under maximum likelihood and expected a posteriori estimation.

    PubMed

    Huebner, Alan R; Wang, Chun; Quinlan, Kari; Seubert, Lauren

    2016-12-01

    Item bank stratification has been shown to be an effective method for combating item overexposure in both uni- and multidimensional computer adaptive testing. However, item bank stratification cannot guarantee that items will not be overexposed-that is, exposed at a rate exceeding some prespecified threshold. In this article, we propose enhancing stratification for multidimensional computer adaptive tests by combining it with the item eligibility method, a technique for controlling the maximum exposure rate in computerized tests. The performance of the method was examined via a simulation study and compared to existing methods of item selection and exposure control. Also, for the first time, maximum likelihood (MLE) and expected a posteriori (EAP) estimation of examinee ability were compared side by side in a multidimensional computer adaptive test. The simulation suggested that the proposed method is effective in suppressing the maximum item exposure rate with very little loss of measurement accuracy and precision. As compared to MLE, EAP generates smaller mean squared errors of the ability estimates in all simulation conditions.

  17. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics.

    PubMed

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-Sung; Maki, Kara L; Ross, David S; Aquavella, James V; Rolland, Jannick P

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. On the assumption that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision.

  18. Full Information Maximum Likelihood Estimation for Latent Variable Interactions With Incomplete Indicators.

    PubMed

    Cham, Heining; Reshetnyak, Evgeniya; Rosenfeld, Barry; Breitbart, William

    2017-01-01

    Researchers have developed missing data handling techniques for estimating interaction effects in multiple regression. Extending to latent variable interactions, we investigated full information maximum likelihood (FIML) estimation to handle incompletely observed indicators for product indicator (PI) and latent moderated structural equations (LMS) methods. Drawing on the analytic work on missing data handling techniques in multiple regression with interaction effects, we compared the performance of FIML for PI and LMS analytically. We performed a simulation study to compare FIML for PI and LMS. We recommend using FIML for LMS when the indicators are missing completely at random (MCAR) or missing at random (MAR) and when they are normally distributed. FIML for LMS produces unbiased parameter estimates with small variances, correct Type I error rates, and high statistical power of interaction effects. We illustrated the use of these methods by analyzing the interaction effect between advanced cancer patients' depression and change of inner peace well-being on future hopelessness levels.

  19. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    NASA Astrophysics Data System (ADS)

    Prix, Reinhard; Krishnan, Badri

    2009-10-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  20. Incorporation of spatial information in Bayesian image reconstruction - The maximum residual likelihood criterion

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1992-01-01

    We have developed a new figure of merit, a 'maximum-residual-likelihood' (MRL) statistic, for the goodness of fit for Bayesian image restoration which explicitly incorporates spatial information. The MRL constraint provides a natural means of incorporating the prior knowledge that the residuals contain no spatial structure through the autocorrelation function of the residuals. We demonstrate that this statistic follows a Chi-square distribution and that forcing this statistic to have its most probable value leads to a restored image whose residuals are consistent with the noise model. Our numerical experiments suggest that image restoration using the MRL statistic alone is numerically robust and produces results which are independent of the initial guess for the restored image. However, we caution that using the MRL statistic without an image prior can result in overresolution in low SNR portions of the image.
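
    The prior knowledge that residuals should carry no spatial structure can be quantified through their autocorrelation at nonzero lags, which for pure noise is asymptotically chi-square distributed. The sketch below computes an illustrative statistic of this kind in one dimension; it is a simplified stand-in, not Pina and Puetter's exact figure of merit.

```python
import numpy as np

def residual_autocorr_stat(residuals, max_lag=3):
    """Sum of squared normalized autocorrelations of the residuals at
    nonzero lags. For white noise each term is asymptotically
    chi-square(1), so spatial structure in the residuals inflates the
    statistic. Illustrative only; not the exact MRL figure of merit."""
    r = (residuals - residuals.mean()) / residuals.std()
    n = r.size
    stat = 0.0
    for lag in range(1, max_lag + 1):
        rho = np.mean(r[:-lag] * r[lag:])   # lag-k autocorrelation
        stat += n * rho ** 2                # ~ chi-square(1) under noise
    return stat

rng = np.random.default_rng(5)
print(residual_autocorr_stat(rng.normal(size=4096)))             # small
print(residual_autocorr_stat(np.cumsum(rng.normal(size=4096))))  # large
```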

  1. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    SciTech Connect

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  2. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE PAGES

    Li, Xinya; Deng, Z. Daniel; ...

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  3. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  4. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions $p_1, p_2, \ldots, p_m$ in the mixture density $f(x) = \sum_{i=1}^{m} p_i f_i(x)$ is often encountered in agricultural remote sensing problems, in which case the $p_i$ usually represent crop proportions. In these remote sensing applications, the component densities $f_i(x)$ have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
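
    As a concrete illustration of the ML side of this comparison, when the component densities are fixed and known the mixing proportions can be estimated by a short EM iteration. The following is a minimal sketch under that assumption; the component parameters and data are illustrative, not from the study.

      import numpy as np
      from scipy.stats import norm

      def em_mixing_proportions(x, components, n_iter=200):
          # Columns hold the known component densities f_i evaluated at the data.
          dens = np.column_stack([c.pdf(x) for c in components])
          p = np.full(dens.shape[1], 1.0 / dens.shape[1])  # uniform start
          for _ in range(n_iter):
              resp = dens * p                              # E-step: responsibilities
              resp /= resp.sum(axis=1, keepdims=True)
              p = resp.mean(axis=0)                        # M-step: new proportions
          return p

      # Illustrative data: a 70/30 mix of two known normal "crop" signatures.
      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
      print(em_mixing_proportions(x, [norm(0, 1), norm(3, 1)]))  # about [0.7, 0.3]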

  5. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  6. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
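
    To make the conditional-independence combination concrete: for binary classifiers with a known sensitivity and specificity (one operating point each on their ROC curves), the joint likelihood ratio factorizes into per-classifier terms, and thresholding it yields a combination rule. A minimal sketch under those assumptions, with illustrative numbers rather than values from the paper:

      def combined_likelihood_ratio(results, sens, spec):
          # P(results | diseased) / P(results | healthy) for conditionally
          # independent binary classifiers with known sensitivity/specificity.
          lr = 1.0
          for r, se, sp in zip(results, sens, spec):
              lr *= se / (1 - sp) if r else (1 - se) / sp
          return lr

      # Two classifiers with assumed operating points (sensitivity, specificity).
      sens, spec = [0.90, 0.75], [0.80, 0.85]
      for outcome in [(1, 1), (1, 0), (0, 1), (0, 0)]:
          print(outcome, round(combined_likelihood_ratio(outcome, sens, spec), 3))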

  7. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the left- and right-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  8. Incorporation of spatial information in Bayesian image reconstruction - The maximum residual likelihood criterion

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1992-01-01

    We have developed a new figure of merit, a 'maximum-residual-likelihood' (MRL) statistic, for the goodness of fit for Bayesian image restoration which explicitly incorporates spatial information. The MRL constraint provides a natural means of incorporating the prior knowledge that the residuals contain no spatial structure through the autocorrelation function of the residuals. We demonstrate that this statistic follows a Chi-square distribution and that forcing this statistic to have its most probable value leads to a restored image whose residuals are consistent with the noise model. Our numerical experiments suggest that image restoration using the MRL statistic alone is numerically robust and produces results which are independent of the initial guess for the restored image. However, we caution that using the MRL statistic without an image prior can result in overresolution in low SNR portions of the image.

  9. A maximum likelihood analysis of the CoGeNT public dataset

    NASA Astrophysics Data System (ADS)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 gram (330 gram fiducial mass) p-type point-contact germanium detector that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  10. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    PubMed

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-08

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement.

  11. Estimation of shape β parameter in Kumaraswamy distribution using Maximum Likelihood and Bayes method

    NASA Astrophysics Data System (ADS)

    Simbolon, H. G.; Fithriani, I.; Nurrohmah, S.

    2017-07-01

    This paper discusses the Maximum Likelihood (ML) and Bayes methods for estimating the shape parameter β in the Kumaraswamy distribution. The two methods are compared according to the Mean Square Error (MSE) of each estimator. In the Bayes method, two loss functions are used, the Squared Error Loss Function (SELF) and the Precautionary Loss Function (PLF), and the posterior risks obtained from the two loss functions are compared. The comparison is applied to hydrological data in order to recommend the method that best represents the data. The hydrological data used in this study are water storage records from Shasta Reservoir, obtained from the California Data Exchange Center. Using Mathematica and the formulas from both methods, one obtains estimates that describe the data well and can predict the next observation of the reservoir at a given time.
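
    For reference, when the first shape parameter α is treated as known, the ML estimate of β for the Kumaraswamy density f(x) = αβ x^(α-1) (1-x^α)^(β-1) has a closed form, since the log-likelihood is n log β + (β-1) Σ log(1-x_i^α) plus terms free of β. A minimal sketch under that assumption; the data must lie in (0, 1) (e.g., storage divided by reservoir capacity), and all values are illustrative.

      import numpy as np

      def kumaraswamy_beta_mle(x, alpha):
          # Setting the derivative of the log-likelihood in beta to zero gives
          # beta_hat = -n / sum(log(1 - x**alpha)).
          x = np.asarray(x)
          return -len(x) / np.sum(np.log1p(-x**alpha))

      rng = np.random.default_rng(1)
      alpha, beta = 2.0, 3.0
      u = rng.uniform(size=5000)
      x = (1 - (1 - u) ** (1 / beta)) ** (1 / alpha)  # inverse-CDF sampling
      print(kumaraswamy_beta_mle(x, alpha))           # close to 3.0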

  12. Comparative analysis of the performance of laser Doppler systems using maximum likelihood and phase increment methods

    NASA Astrophysics Data System (ADS)

    Sobolev, V. S.; Zhuravel', F. A.; Kashcheeva, G. A.

    2016-11-01

    This paper presents a comparative analysis of the errors of two alternative methods of estimating the central frequency of signals in laser Doppler systems, one based on the maximum likelihood criterion and the other on the so-called pulse-pair (phase increment) technique. Using computer simulation, the standard deviations of the Doppler signal frequency from its true values are determined for both methods, and the ratios of these deviations are plotted as a measure of the accuracy gain of one method over the other. The results can be used by developers of such systems to choose an optimal signal processing algorithm based on a compromise between the accuracy and speed of the system as well as the computational burden.

  13. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.

  14. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    SciTech Connect

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.

  15. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  16. Effective Two-Dimensional Partial Response Maximum Likelihood Detection Scheme for Holographic Data Storage Systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2012-08-01

    An effective two-dimensional (2D) partial response maximum likelihood (PRML) detection scheme for holographic data storage (HDS) systems is proposed. The proposed scheme adopts a simplified trellis diagram, uses a priori information, and, unlike previously proposed detection schemes, detects the data in two directions. The simplified trellis diagram, which has 4 states and 8 branches, yields a dramatic complexity reduction, but on its own the simplified 2D PRML detector shows serious performance degradation in high-density HDS channels. To prevent this degradation, the proposed detector uses a priori information to give higher reliability to the branch metric. Furthermore, the proposed scheme detects the data in both the vertical and horizontal directions to fully utilize the characteristics of channel detection with a 2D partial response target. By effectively combining these three techniques, the proposed scheme achieves, with a simple structure, gains of more than 2 dB over conventional detection schemes.

  17. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. To obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  18. Equivalence between modularity optimization and maximum likelihood methods for community detection

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.

    2016-11-01

    We demonstrate an equivalence between two widely used methods of community detection in networks, the method of modularity maximization and the method of maximum likelihood applied to the degree-corrected stochastic block model. Specifically, we show an exact equivalence between maximization of the generalized modularity that includes a resolution parameter and the special case of the block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
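
    Sketching the stated result in symbols (with $A_{ij}$ the adjacency matrix, $k_i$ the node degrees, $m$ the number of edges, $g_i$ the community assignments, and $\omega_{\mathrm{in}}$, $\omega_{\mathrm{out}}$ the within- and between-group edge parameters of the planted partition model; treat this as a paraphrase of the result, not the paper's exact notation):

      Q(\gamma) = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \gamma\,\frac{k_i k_j}{2m}\right)\delta_{g_i g_j},
      \qquad
      \gamma = \frac{\omega_{\mathrm{in}} - \omega_{\mathrm{out}}}{\ln \omega_{\mathrm{in}} - \ln \omega_{\mathrm{out}}},

    so that maximizing the planted-partition log-likelihood over community assignments is equivalent to maximizing $Q(\gamma)$ at this particular resolution.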

  19. A maximum likelihood analysis of the CoGeNT public dataset

    SciTech Connect

    Kelso, Chris

    2016-06-21

    The CoGeNT detector, located in the Soudan Underground Laboratory in Northern Minnesota, consists of a 475 gram (330 gram fiducial mass) p-type point-contact germanium detector that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December of 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.

  20. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  1. Improved maximum likelihood detection for mitigating fading estimation error in free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Wu, Zhiyong; Zhang, Yaoyu; Huang, Detian

    2013-01-01

    To mitigate the impact of the error between the estimated and the true channel fading coefficients on the bit error rate (BER), an a priori conditional probability density function that averages over the estimation error is proposed. An improved maximum-likelihood (ML) symbol-by-symbol detection is then derived for free-space optical communication systems that implement pilot symbol assisted modulation. To reduce complexity, a closed-form suboptimal improved ML detection is deduced using a distribution approximation. Numerical results confirm that the improved ML detection achieves a BER performance improvement and that its suboptimal version performs comparably. Both therefore outperform classical ML detection, which does not consider channel estimation error.

  2. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses, and its statistical properties are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope.
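
    A minimal numerical sketch of the core estimation step, assuming the spectrum is observed directly on a finite energy band (the paper additionally folds in simulated detector responses, which are omitted here; all parameter values are illustrative):

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      E_MIN, E_MAX = 1.0, 1e3  # observation band, illustrative units

      def shape(e, g1, g2, eb):
          # Unnormalized broken power law, continuous at the break energy eb.
          return (e / eb) ** (-g1) if e <= eb else (e / eb) ** (-g2)

      def neg_log_like(params, energies):
          g1, g2, log_eb = params
          eb = np.exp(log_eb)  # log-parametrized to keep the break positive
          pts = [min(max(eb, E_MIN), E_MAX)]
          norm, _ = quad(shape, E_MIN, E_MAX, args=(g1, g2, eb), points=pts)
          g = np.where(energies <= eb, g1, g2)
          return np.sum(g * np.log(energies / eb)) + len(energies) * np.log(norm)

      def sample_bpl(n, g1, g2, eb, rng):
          # Rejection sampling with a log-uniform proposal (valid for g1, g2 > 1,
          # where shape(E) * E is maximized at E_MIN).
          m = shape(E_MIN, g1, g2, eb) * E_MIN
          out = []
          while len(out) < n:
              e = np.exp(rng.uniform(np.log(E_MIN), np.log(E_MAX)))
              if rng.uniform() < shape(e, g1, g2, eb) * e / m:
                  out.append(e)
          return np.array(out)

      rng = np.random.default_rng(2)
      energies = sample_bpl(2000, 1.8, 2.7, 50.0, rng)
      res = minimize(neg_log_like, x0=[2.0, 3.0, np.log(30.0)], args=(energies,),
                     method="Nelder-Mead")
      print(res.x[0], res.x[1], np.exp(res.x[2]))  # roughly (1.8, 2.7, 50)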

  3. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis

    PubMed Central

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-01-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster and the users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at. It is free and open to all users and there is no login requirement. PMID:27084950

  4. Galaxy and Mass Assembly (GAMA): maximum-likelihood determination of the luminosity function and its evolution

    NASA Astrophysics Data System (ADS)

    Loveday, J.; Norberg, P.; Baldry, I. K.; Bland-Hawthorn, J.; Brough, S.; Brown, M. J. I.; Driver, S. P.; Kelvin, L. S.; Phillipps, S.

    2015-08-01

    We describe modifications to the joint stepwise maximum-likelihood method of Cole in order to simultaneously fit the Galaxy and Mass Assembly II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well fitted with luminosity and density evolution parameters (Qe, Pe) ≈ (1.0, 1.0), but with significant degeneracies characterized by Qe ≈ 1.4 − 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red subsamples, using both Petrosian and Sérsic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sérsic LF is substantially higher than the Petrosian LF at the bright end.

  5. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst $O(d^4)$ for the basis change plus $O(d^3)$ for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only $O(d^3)$ as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
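
    After diagonalization, the final step reduces to Euclidean projection of the eigenvalue vector onto the probability simplex. A minimal numpy sketch of that reduction (a standard simplex projection is used here for illustration; it matches the paper's routine in output but is not a transcription of it):

      import numpy as np

      def simplex_projection(v):
          # Closest probability vector to v in Euclidean distance.
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          k = np.arange(1, len(v) + 1)
          rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
          theta = (1.0 - css[rho]) / (rho + 1)
          return np.maximum(v + theta, 0.0)

      def closest_density_matrix(mu):
          # Diagonalize the candidate matrix, replace its eigenvalue vector by
          # the nearest probability vector, and rebuild the state.
          evals, evecs = np.linalg.eigh(mu)
          lam = simplex_projection(evals)
          return (evecs * lam) @ evecs.conj().T

      # Illustrative 'measured' matrix with one nonphysical eigenvalue.
      mu = np.diag([0.7, 0.4, -0.1])
      print(np.linalg.eigvalsh(closest_density_matrix(mu)))  # [0, 0.35, 0.65]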

  6. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  7. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, T.A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
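
    For orientation, the censored-data likelihood underlying this family of estimators is simple to write down: uncensored observations contribute density terms, and censored observations contribute the probability of falling below the detection limit. A minimal sketch of that plain (Tobit-style) MLE for left-censored lognormal concentrations, which AMLE then bias-adjusts; data and detection limit are illustrative.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def censored_lognormal_nll(params, logc, censored):
          # Negative log-likelihood for log-concentrations logc, where censored
          # entries record the log detection limit instead of a measured value.
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          ll_obs = norm.logpdf(logc[~censored], mu, sigma).sum()
          ll_cen = norm.logcdf(logc[censored], mu, sigma).sum()  # P(value < limit)
          return -(ll_obs + ll_cen)

      rng = np.random.default_rng(3)
      true = rng.normal(0.0, 1.0, 500)          # true log-concentrations
      limit = -0.5                              # log detection limit
      censored = true < limit
      logc = np.where(censored, limit, true)
      res = minimize(censored_lognormal_nll, x0=[0.0, 0.0], args=(logc, censored))
      print(res.x[0], np.exp(res.x[1]))         # close to (0.0, 1.0)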

  8. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.

  9. GFREG: a computer program for maximum likelihood regression using the Generalized F distribution.

    PubMed

    Hogg, S A; Ciampi, A

    1985-07-01

    A FORTRAN program is described for maximum likelihood estimation within the Generalized F family of distributions. It can be used to estimate regression parameters in a log-linear model for censored survival times with covariates, for which the error distribution may have a great variety of shapes, including most distributions of current use in biostatistics. The optimization is performed by an algorithm based on the generalized reduced gradient method. A stepwise variable search algorithm for covariate selection is included in the program. Output features include: model selection criteria, standard errors of parameter estimates, quantile and survival rates with their standard errors, residuals and several plots. An example based on data from Princess Margaret Hospital, Toronto, is discussed to illustrate the program's capabilities.

  10. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    NASA Astrophysics Data System (ADS)

    Waligórska, Marta; Molski, Andrzej

    2012-07-01

    When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  11. Real-time Data Acquisition and Maximum-Likelihood Estimation for Gamma Cameras

    PubMed Central

    Furenlid, L.R.; Hesterman, J.Y.; Barrett, H.H.

    2015-01-01

    We have developed modular gamma-ray cameras for biomedical imaging that acquire data with a raw list-mode acquisition architecture. All observations associated with a gamma-ray event, such as photomultiplier (PMT) signals and time, are assembled into an event packet and added to an ordered list of event entries that comprise the acquired data. In this work we present the design of the data-acquisition system, and discuss algorithms for a specialized computing engine to reside in the data path between the front and back ends of each camera and carry out maximum-likelihood position and energy estimation in real time while data are being acquired. PMID:27066595

  12. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

    In human genetic maps, differences between female ($x_f$) and male ($x_m$) map distances may be characterized by the ratio, $R = x_f/x_m$, or the relative difference, $Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1)$. For a map of genetic markers spread along a chromosome, $Q(d)$ may be viewed as a graph of $Q$ versus the midpoints, $d$, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of $Q(d)$ along the chromosome, where $Q(d)$ is expressed as a polynomial in $d$. Stepwise maximum likelihood polynomial regression of $Q$ is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  14. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  15. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  16. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    PubMed Central

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517

  17. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  18. IM3SHAPE: a maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

    NASA Astrophysics Data System (ADS)

    Zuntz, Joe; Kacprzak, Tomasz; Voigt, Lisa; Hirsch, Michael; Rowe, Barnaby; Bridle, Sarah

    2013-09-01

    We present and describe IM3SHAPE, a new publicly available galaxy shape measurement code for weak gravitational lensing shear. IM3SHAPE performs a maximum likelihood fit of a bulge-plus-disc galaxy model to noisy images, incorporating an applied point spread function. We detail challenges faced and choices made in its design and implementation, and then discuss various limitations that affect this and other maximum likelihood methods. We assess the bias arising from fitting an incorrect galaxy model using simple noise-free images and find that it should not be a concern for current cosmic shear surveys. We test IM3SHAPE on the Gravitational Lensing Accuracy Testing 2008 (GREAT08) challenge image simulations, and meet the requirements for upcoming cosmic shear surveys in the case that the simulations are encompassed by the fitted model, using a simple correction for image noise bias. For the fiducial branch of GREAT08 we obtain a negligible additive shear bias and sub-two per cent level multiplicative bias, which is suitable for analysis of current surveys. We fall short of the sub-per cent level requirement for upcoming surveys, which we attribute to a combination of noise bias and the mismatch between our galaxy model and the model used in the GREAT08 simulations. We meet the requirements for current surveys across all branches of GREAT08, except those with small or high noise galaxies, which we would cut from our analysis. Using the GREAT08 metric we obtain a score of Q = 717 for the usable branches, relative to the goal of Q = 1000 for future experiments. The code is freely available from https://bitbucket.org/joezuntz/im3shape

  19. Collaborative targeted maximum likelihood estimation for variable importance measure: Illustration for functional outcome prediction in mild traumatic brain injuries.

    PubMed

    Pirracchio, Romain; Yue, John K; Manley, Geoffrey T; van der Laan, Mark J; Hubbard, Alan E

    2016-06-29

    Standard statistical practice used for determining the relative importance of competing causes of disease typically relies on ad hoc methods, often byproducts of machine learning procedures (stepwise regression, random forest, etc.). A causal inference framework and data-adaptive methods may help to tailor parameters to match the clinical question and free one from arbitrary modeling assumptions. Our focus is on implementations of such semiparametric methods for a variable importance measure (VIM). We propose a fully automated procedure for VIM based on collaborative targeted maximum likelihood estimation (cTMLE), a method that optimizes the estimate of an association in the presence of potentially numerous competing causes. We applied the approach to data collected from traumatic brain injury patients, specifically a prospective, observational study including three US Level-1 trauma centers. The primary outcome was a disability score (Glasgow Outcome Scale - Extended (GOSE)) collected three months post-injury. We identified clinically important predictors among a set of risk factors using a variable importance analysis based on targeted maximum likelihood estimators (TMLE) and on cTMLE. Via a parametric bootstrap, we demonstrate that the latter procedure has the potential for robust automated estimation of variable importance measures based upon machine-learning algorithms. The cTMLE estimator was associated with substantially less positivity bias as compared to TMLE and larger coverage of the 95% CI. This study confirms the power of an automated cTMLE procedure that can target model selection via machine learning to estimate VIMs in complicated, high-dimensional data.

  20. Simultaneous Multiple Response Regression and Inverse Covariance Matrix Estimation via Penalized Gaussian Maximum Likelihood.

    PubMed

    Lee, Wonyul; Liu, Yufeng

    2012-10-01

    Multivariate regression is a common statistical tool for practical problems. Many multivariate regression techniques are designed for univariate response cases. For problems with multiple response variables available, one common approach is to apply the univariate response regression technique separately on each response variable. Although it is simple and popular, the univariate response approach ignores the joint information among response variables. In this paper, we propose three new methods for utilizing joint information among response variables. All methods are in a penalized likelihood framework with weighted $L_1$ regularization. The proposed methods provide sparse estimators of the conditional inverse covariance matrix of the response vector given explanatory variables, as well as sparse estimators of the regression parameters. Our first approach is to estimate the regression coefficients with plug-in estimated inverse covariance matrices, and our second approach is to estimate the inverse covariance matrix with plug-in estimated regression parameters. Our third approach is to estimate both simultaneously. Asymptotic properties of these methods are explored. Our numerical examples demonstrate that the proposed methods perform competitively in terms of prediction, variable selection, and inverse covariance matrix estimation.
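
    A minimal sketch in the spirit of the plug-in strategies described above, built from off-the-shelf components rather than the authors' algorithm: a per-response lasso for the regression coefficients, then an L1-penalized Gaussian likelihood (graphical lasso) on the residuals for the sparse inverse covariance. Penalty values and data are illustrative.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(4)
      n, p, q = 200, 10, 4                    # samples, predictors, responses
      X = rng.normal(size=(n, p))
      B = np.zeros((p, q)); B[:3, :] = 1.0    # sparse true coefficients
      Y = X @ B + rng.normal(scale=0.5, size=(n, q))

      # Step 1: plug-in regression estimates, one lasso per response variable.
      B_hat = np.column_stack([Lasso(alpha=0.05).fit(X, Y[:, j]).coef_
                               for j in range(q)])

      # Step 2: sparse inverse covariance of the residuals via penalized
      # Gaussian maximum likelihood (graphical lasso).
      resid = Y - X @ B_hat
      omega_hat = GraphicalLasso(alpha=0.05).fit(resid).precision_
      print(np.round(omega_hat, 2))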

  1. Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.

    PubMed

    Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane

    2010-01-01

    An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.

  2. Maximum-likelihood scintillation detection for EM-CCD based gamma cameras.

    PubMed

    Korevaar, Marc A N; Goorden, Marlies C; Heemskerk, Jan W T; Beekman, Freek J

    2011-08-07

    Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine a good detection efficiency with high spatial resolutions with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For (99m)Tc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45°), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not consistently differ significantly from the MSA result for all tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 µm full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves the gamma camera performance compared to the MSA.

  3. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    PubMed Central

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-01-01

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use. PMID:27283980

  4. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
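
    The idea is compact enough to sketch: evaluate the likelihood on a small grid, recenter on the best node, contract the grid span, and repeat. A minimal 2D version follows, with an illustrative log-likelihood surface and arbitrary contraction settings (the paper's hardware-pipelined variants are not reproduced here).

      import numpy as np

      def contracting_grid_search(loglike, center, span, n=5, iters=8, shrink=0.5):
          # Maximize loglike(x, y) by repeatedly evaluating an n-by-n grid,
          # recentring on the best node, and contracting the grid span.
          cx, cy = center
          for _ in range(iters):
              xs = cx + np.linspace(-span, span, n)
              ys = cy + np.linspace(-span, span, n)
              vals = np.array([[loglike(x, y) for y in ys] for x in xs])
              i, j = np.unravel_index(np.argmax(vals), vals.shape)
              cx, cy, span = xs[i], ys[j], span * shrink
          return cx, cy

      # Illustrative likelihood surface: a Gaussian blob centred at (1.2, -0.7).
      ll = lambda x, y: -((x - 1.2) ** 2 + (y + 0.7) ** 2)
      print(contracting_grid_search(ll, center=(0.0, 0.0), span=2.0))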

  5. Application of a maximum likelihood algorithm to ultrasound modulated optical tomography.

    PubMed

    Huynh, Nam T; He, Diwei; Hayes-Gill, Barrie R; Crowe, John A; Walker, John G; Mather, Melissa L; Rose, Felicity R A J; Parker, Nicholas G; Povey, Malcolm J W; Morgan, Stephen P

    2012-02-01

    In pulsed ultrasound modulated optical tomography (USMOT), an ultrasound (US) pulse performs as a scanning probe within the sample as it propagates, modulating the scattered light spatially distributed along its propagation axis. Detecting and processing the modulated signal can provide a 1-dimensional image along the US axis. A simple model is developed wherein the detected signal is modelled as a convolution of the US pulse and the properties (ultrasonic/optical) of the medium along the US axis. Based upon this model, a maximum likelihood (ML) method for image reconstruction is established. For the first time to our knowledge, the ML technique for an USMOT signal is investigated both theoretically and experimentally. The ML method inverts the data to retrieve the spatially varying properties of the sample along the US axis, and a signal proportional to the optical properties can be acquired. Simulated results show that the ML method can serve as a useful reconstruction tool for a pulsed USMOT signal even when the signal-to-noise ratio (SNR) is close to unity. Experimental data using 5 cm thick tissue phantoms (scattering coefficient μ(s) = 6.5 cm(-1), anisotropy factor g=0.93) demonstrate that the axial resolution is 160 μm and the lateral resolution is 600 μm using a 10 MHz transducer.

  6. Maximum-likelihood q-estimator uncovers the role of potassium at neuromuscular junctions.

    PubMed

    da Silva, A J; Trindade, M A S; Santos, D O C; Lima, R F

    2016-02-01

    Recently, we demonstrated the existence of nonextensive behavior in neuromuscular transmission (da Silva et al. in Phys Rev E 84:041925, 2011). In this letter, we first obtain a maximum-likelihood q-estimator to calculate the scale factor ([Formula: see text]) and the q-index of q-Gaussian distributions. Next, we use the indexes to analyze spontaneous miniature end plate potentials in electrophysiological recordings from neuromuscular junctions. These calculations were performed assuming both normal and high extracellular potassium concentrations [Formula: see text]. This protocol was used to test the validity of Tsallis statistics under electrophysiological conditions closely resembling physiological stimuli. The analysis shows that q-indexes are distinct depending on the extracellular potassium concentration. Our letter provides a general way to obtain the best estimate of parameters from a q-Gaussian distribution function. It also expands the validity of Tsallis statistics in realistic physiological stimulus conditions. In addition, we discuss the physical and physiological implications of these findings.

  7. Maximum Likelihood Event Estimation and List-mode Image Reconstruction on GPU Hardware

    PubMed Central

    Caucci, Luca; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    The scintillation detectors commonly used in SPECT and PET imaging and in Compton cameras require estimation of the position and energy of each gamma ray interaction. Ideally, this process would yield images with no spatial distortion and the best possible spatial resolution. In addition, especially for Compton cameras, the computation must yield the best possible estimate of the energy of each interacting gamma ray. These goals can be achieved by use of maximum-likelihood (ML) estimation of the event parameters, but in the past the search for an ML estimate has not been computationally feasible. Now, however, graphics processing units (GPUs) make it possible to produce optimal, real-time estimates of position and energy, even from scintillation cameras with a large number of photodetectors. In addition, the mathematical properties of ML estimates make them very attractive for use as list entries in list-mode ML image reconstruction. This two-step ML process—using ML estimation once to get the list data and again to reconstruct the object—allows accurate modeling of the detector blur and, potentially, considerable improvement in reconstructed spatial resolution. PMID:21278803

  8. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) model and the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by the Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by the MLE and the kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model than by the nonlinear RC model. On the other hand, the patient-group respiratory signals are fitted better by the nonlinear RC model, with lower measurement-noise variance, a better-converged measurement-noise shape factor, and better model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian region.
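
    A minimal sketch of the two-step scheme described here: the GGD shape factor from kurtosis matching, then the closed-form ML scale estimate given that shape. The function names and the Laplace test noise are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.special import gamma
    from scipy.optimize import brentq
    from scipy.stats import kurtosis

    def ggd_shape_from_kurtosis(x):
        """Match the sample kurtosis to the GGD kurtosis
        Gamma(5/b)*Gamma(1/b)/Gamma(3/b)**2 to estimate the shape b
        (b = 2 is Gaussian, b < 2 super-Gaussian)."""
        k_sample = kurtosis(x, fisher=False)
        f = lambda b: gamma(5 / b) * gamma(1 / b) / gamma(3 / b) ** 2 - k_sample
        return brentq(f, 0.3, 10.0)   # bracket assumed to contain the root

    def ggd_scale_mle(x, b):
        """Closed-form ML estimate of the GGD scale given the shape b."""
        return ((b / len(x)) * np.sum(np.abs(x) ** b)) ** (1.0 / b)

    noise = np.random.laplace(size=5000)      # true shape b = 1
    b_hat = ggd_shape_from_kurtosis(noise)    # ~1, i.e. super-Gaussian
    alpha_hat = ggd_scale_mle(noise, b_hat)
    ```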

  9. Gutenberg-Richter b-value maximum likelihood estimation and sample size

    NASA Astrophysics Data System (ADS)

    Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.

    2017-01-01

    The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results quantify the probability of obtaining a correct estimate of b, at a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
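
    For concreteness, here is the Aki-Utsu estimator with the standard half-bin correction for rounded magnitudes, wrapped in a small Monte Carlo loop of the kind the study describes. The magnitude of completeness, bin width, and sample sizes are illustrative.

    ```python
    import numpy as np

    def b_value_aki_utsu(mags, m_c, dm=0.1):
        """Aki-Utsu ML estimate with the half-bin correction for
        magnitudes rounded to bins of width dm."""
        m = mags[mags >= m_c]
        return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

    # Monte Carlo check of sample-size effects: draw exponential
    # (Gutenberg-Richter) magnitudes, round to 0.1, and watch the
    # scatter of the b estimate shrink as n grows.
    b_true, m_c, dm = 1.0, 2.0, 0.1
    beta = b_true * np.log(10)
    for n in (50, 200, 2000):
        est = []
        for _ in range(1000):
            mags = m_c + np.random.exponential(1.0 / beta, size=n)
            mags = np.round(mags / dm) * dm
            est.append(b_value_aki_utsu(mags, m_c, dm))
        print(n, np.mean(est), np.std(est))
    ```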

  10. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
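
    A schematic of the procedure under an assumed normal model: draw a parametric bootstrap sample, take its MLE, form expected counts from that bootstrap MLE, and take observed counts from the original data. The binning choice below is an illustrative assumption.

    ```python
    import numpy as np
    from scipy import stats

    def bootstrap_pearson(x, edges):
        """Pearson statistic with the MLE taken from one parametric
        bootstrap sample (normal model used for illustration)."""
        n = len(x)
        mu, sd = x.mean(), x.std()            # MLE for the normal model
        boot = np.random.normal(mu, sd, n)    # parametric bootstrap sample
        mu_b, sd_b = boot.mean(), boot.std()  # bootstrap-sample MLE
        observed, _ = np.histogram(x, bins=edges)
        cdf = stats.norm.cdf(edges, mu_b, sd_b)
        expected = n * np.diff(cdf)           # expected counts per bin
        return np.sum((observed - expected) ** 2 / expected)

    x = np.random.normal(size=300)
    edges = np.quantile(x, np.linspace(0, 1, 9))   # 8 bins
    edges[0], edges[-1] = -np.inf, np.inf
    t = bootstrap_pearson(x, edges)
    p = stats.chi2.sf(t, df=len(edges) - 2)   # chi-squared reference, k-1 df
    ```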

  12. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  13. A maximum-likelihood search for neutrino point sources with the AMANDA-II detector

    NASA Astrophysics Data System (ADS)

    Braun, James R.

    Neutrino astronomy offers a new window to study the high energy universe. The AMANDA-II detector records neutrino-induced muon events in the ice sheet beneath the geographic South Pole, and has accumulated 3.8 years of livetime from 2000 - 2006. After reconstructing muon tracks and applying selection criteria, we arrive at a sample of 6595 events originating from the Northern Sky, predominantly atmospheric neutrinos with primary energy 100 GeV to 8 TeV. We search these events for evidence of astrophysical neutrino point sources using a maximum-likelihood method. No excess above the atmospheric neutrino background is found, and we set upper limits on neutrino fluxes. Finally, a well-known potential dark matter signature is emission of high energy neutrinos from annihilation of WIMPs gravitationally bound to the Sun. We search for high energy neutrinos from the Sun and find no excess. Our limits on WIMP-nucleon cross section set new constraints on MSSM parameter space.
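
    The standard unbinned point-source likelihood behind such searches mixes per-event signal and background densities and maximizes over the number of signal events. A minimal sketch, assuming the densities S and B have already been evaluated at the event coordinates and energies:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def source_test_statistic(S, B):
        """S, B: per-event signal and background densities evaluated at
        the event coordinates/energies. Maximizes L(ns) over the number
        of signal events; returns (ns_hat, TS = 2 log likelihood ratio)."""
        N = len(S)
        def neg_log_lr(ns):
            # log of [ (ns/N) S_i + (1 - ns/N) B_i ] / B_i, summed and negated
            return -np.sum(np.log(ns / N * S + (1 - ns / N) * B) - np.log(B))
        res = minimize_scalar(neg_log_lr, bounds=(0.0, N), method="bounded")
        return res.x, max(0.0, -2.0 * res.fun)

    # Toy usage: background-only data should give TS near zero.
    rng = np.random.default_rng(0)
    B = np.full(1000, 1.0 / (4 * np.pi))        # isotropic background
    S = rng.exponential(1.0, size=1000) * B     # stand-in signal density
    ns_hat, TS = source_test_statistic(S, B)
    ```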

  14. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  15. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E.

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing, and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
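
    The reconstruction evaluated here rests on the classic ML-EM multiplicative update. A minimal sketch with a toy system matrix; the matrix, image size, and count level are illustrative, not the study's simulation.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=1000):
        """Classic ML-EM update for emission tomography:
        lam <- (lam / A^T 1) * A^T ( y / (A lam) )."""
        lam = np.ones(A.shape[1])            # flat initial image
        sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ lam                   # forward projection
            lam *= (A.T @ (y / np.clip(proj, 1e-12, None))) / sens
        return lam

    # Toy example: 30-pixel image, 60 projection bins.
    rng = np.random.default_rng(0)
    A = rng.random((60, 30))                 # stand-in system matrix
    lam_true = rng.random(30) * 10
    y = rng.poisson(A @ lam_true)            # Poisson sinogram data
    lam_hat = mlem(A, y)
    ```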

  16. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

  17. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    NASA Astrophysics Data System (ADS)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models have subsequently been developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.

  18. Improving water quality forecasting via data assimilation - Application of maximum likelihood ensemble filter to HSPF

    NASA Astrophysics Data System (ADS)

    Kim, Sunghee; Seo, Dong-Jun; Riazi, Hamideh; Shin, Changmin

    2014-11-01

    An ensemble data assimilation (DA) procedure is developed and evaluated for the Hydrologic Simulation Program - Fortran (HSPF), a widely used watershed water quality model. The procedure aims at improving the accuracy of short-range water quality prediction by updating the model initial conditions (IC) based on real-time observations of hydrologic and water quality variables. The observations assimilated include streamflow, biochemical oxygen demand (BOD), dissolved oxygen (DO), chlorophyll a (CHL-a), nitrate (NO3), phosphate (PO4) and water temperature (TW). The DA procedure uses the maximum likelihood ensemble filter (MLEF), which is capable of handling both nonlinear model dynamics and nonlinear observation equations, in a fixed-lag smoother formulation. For evaluation, the DA procedure was applied to the Kumho Catchment of the Nakdong River Basin in the Republic of Korea. A set of performance measures was used to evaluate analysis and prediction of streamflow and water quality variables. To remove systematic errors in the model simulation originating from structural and parametric errors, a parsimonious bias correction procedure is incorporated into the observation equation. The results show that the DA procedure substantially improves predictive skill for most variables; reduction in root mean square error ranges from 11% to 60% for Day-1 through 3 predictions for all observed variables except DO. It is seen that MLEF handles highly nonlinear hydrologic and biochemical observation equations very well, and that it is an effective DA technique for water quality forecasting.

  19. Maximum Likelihood Ensemble Filter-based Data Assimilation with HSPF for Improving Water Quality Forecasting

    NASA Astrophysics Data System (ADS)

    Kim, S.; Riazi, H.; Shin, C.; Seo, D.

    2013-12-01

    Due to the large dimensionality of the state vector and the sparsity of observations, the initial conditions (IC) of water quality models are subject to large uncertainties. To reduce the IC uncertainties in operational water quality forecasting, an ensemble data assimilation (DA) procedure for the Hydrologic Simulation Program - Fortran (HSPF) model has been developed and evaluated for the Kumho River Subcatchment of the Nakdong River Basin in Korea. The procedure, referred to herein as MLEF-HSPF, uses the maximum likelihood ensemble filter (MLEF), which combines strengths of variational assimilation (VAR) and the ensemble Kalman filter (EnKF). The control variables involved in the DA procedure include the bias correction factors for mean areal precipitation and mean areal potential evaporation, the hydrologic state variables, and the water quality state variables such as water temperature, dissolved oxygen (DO), biochemical oxygen demand (BOD), ammonium (NH4), nitrate (NO3), phosphate (PO4) and chlorophyll a (CHL-a). Due to the very large dimensionality of the inverse problem, accurately specifying the parameters for the DA procedure is a challenge. Systematic sensitivity analysis is carried out to identify the optimal parameter settings. To evaluate the robustness of MLEF-HSPF, we use multiple subcatchments of the Nakdong River Basin. In evaluation, we focus on the performance of MLEF-HSPF in predicting extreme water quality events.

  20. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
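
    The central mechanism is that independent data sets combine by summing their log-likelihoods, so Fisher information adds and the standard deviations of the fitted spectral parameters shrink. A minimal sketch with two Poisson "instruments" observing the same power law; the model, exposures, and binning are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Two instruments observe the same power-law spectrum with different
    # exposures; the joint ML simply sums their log-likelihoods.
    E = np.logspace(0, 2, 20)                     # energy bins (illustrative)
    def expected(gamma, exposure):                # stand-in response model
        return exposure * E ** (-gamma)

    rng = np.random.default_rng(1)
    gamma_true = 2.0
    y1 = rng.poisson(expected(gamma_true, 5e3))   # instrument 1 counts
    y2 = rng.poisson(expected(gamma_true, 2e4))   # instrument 2 counts

    def nll(gamma, y, exposure):
        lam = expected(gamma, exposure)
        return np.sum(lam - y * np.log(lam))      # Poisson NLL (constants dropped)

    joint = minimize_scalar(lambda g: nll(g, y1, 5e3) + nll(g, y2, 2e4),
                            bounds=(1.0, 3.0), method="bounded")
    gamma_hat = joint.x
    ```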

  1. MAXIMUM LIKELIHOOD FOREGROUND CLEANING FOR COSMIC MICROWAVE BACKGROUND POLARIMETERS IN THE PRESENCE OF SYSTEMATIC EFFECTS

    SciTech Connect

    Bao, C.; Hanany, S.; Baccigalupi, C.; Gold, B.; Jaffe, A.; Stompor, R.

    2016-03-01

    We extend a general maximum likelihood foreground estimation for cosmic microwave background (CMB) polarization data to include estimation of instrumental systematic effects. We focus on two particular effects: frequency band measurement uncertainty and instrumentally induced frequency dependent polarization rotation. We assess the bias induced on the estimation of the B-mode polarization signal by these two systematic effects in the presence of instrumental noise and uncertainties in the polarization and spectral index of Galactic dust. Degeneracies between uncertainties in the band and polarization angle calibration measurements and in the dust spectral index and polarization increase the uncertainty in the extracted CMB B-mode power, and may give rise to a biased estimate. We provide a quantitative assessment of the potential bias and increased uncertainty in an example experimental configuration. For example, we find that with 10% polarized dust, a tensor to scalar ratio of r = 0.05, and the instrumental configuration of the E and B experiment balloon payload, the estimated CMB B-mode power spectrum is recovered without bias when the frequency band measurement has 5% uncertainty or less, and the polarization angle calibration has an uncertainty of up to 4°.

  2. Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Imani, Mahdi; Braga-Neto, Ulisses M.

    2017-01-01

    Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.

  3. Maximum likelihood estimation of proton irradiated field and deposited dose distribution.

    PubMed

    Inaniwa, Taku; Kohno, Toshiyuki; Yamagata, Fumiko; Tomitani, Takehiro; Sato, Shinji; Kanazawa, Mitsutaka; Kanai, Tatsuaki; Urakabe, Eriko

    2007-05-01

    In proton therapy, it is important to evaluate the field irradiated with protons and the deposited dose distribution in a patient's body. Positron emitters generated through fragmentation reactions of target nuclei can be used for this purpose. By detecting the annihilation gamma rays from the positron emitters, the annihilation gamma ray distribution can be obtained which has information about the quantities essential to proton therapy. In this study, we performed irradiation experiments with mono-energetic proton beams of 160 MeV and the spread-out Bragg peak beams to three kinds of targets. The annihilation events were detected with a positron camera for 500 s after the irradiation and the annihilation gamma ray distributions were obtained. In order to evaluate the range and the position of distal and proximal edges of the SOBP, the maximum likelihood estimation (MLE) method was applied to the detected distributions. The evaluated values with the MLE method were compared with those estimated from the measured dose distributions. As a result, the ranges were determined with the difference between the MLE range and the experimental range less than 1.0 mm for all targets. For the SOBP beams, the positions of distal edges were determined with the difference less than 1.0 mm. On the other hand, the difference amounted to 7.9 mm for proximal edges.

  4. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
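
    For a Poisson point process with a known pulse shape, the ML arrival time maximizes the summed log-intensity at the observed photon times. A minimal grid-search sketch under assumed pulse and background rates; the thinning-based simulation is illustrative.

    ```python
    import numpy as np

    def ml_toa(photon_times, pulse_rate, taus, background=1.0):
        """Grid-search ML time-of-arrival for Poisson-detected photons.
        pulse_rate(t) is the known signal intensity. The integral of the
        total rate is (nearly) independent of tau when the pulse lies
        inside the observation window, so it is dropped here."""
        ll = [np.sum(np.log(background + pulse_rate(photon_times - tau)))
              for tau in taus]
        return taus[int(np.argmax(ll))]

    # Illustrative Gaussian pulse arriving at tau = 2.0 over a flat background.
    rate = lambda t: 50.0 * np.exp(-0.5 * (t / 0.1) ** 2)
    rng = np.random.default_rng(2)
    # Thinning: simulate photons on [0, T] from background + shifted pulse.
    T, lam_max = 4.0, 60.0
    cand = rng.uniform(0, T, rng.poisson(lam_max * T))
    keep = rng.uniform(0, lam_max, len(cand)) < 1.0 + rate(cand - 2.0)
    tau_hat = ml_toa(cand[keep], rate, np.linspace(0, T, 400))
    ```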

  5. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    SciTech Connect

    Gelmini, Graciela B.; Georgescu, Andreea; Huh, Ji-Haeng; Gondolo, Paolo

    2015-11-01

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = −0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = −0.8.

  6. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and CLEAN technique.

    PubMed

    Ma, Changzheng; Yeo, Tat Soon; Tan, Chee Seng; Tan, Hwee Siang

    2010-08-01

    A large 2-D sparse array provides high angular resolution microwave images, but artifacts are also induced by the high sidelobes of the beam pattern, thus limiting its dynamic range. The CLEAN technique has been used in the literature to extract strong scatterers for use in subsequent signal cancellation (artifact removal). However, the performance of DFT-based parameter estimation in the CLEAN algorithm is known to be poor for estimating signal amplitudes, and this affects the signal cancellation. In this paper, the DFT is used only to provide initial estimates, and a maximum likelihood parameter estimation method with a steepest-descent implementation is then used to improve the precision of the calculated scatterer positions and amplitudes. Time domain information is also used to reduce the sidelobe levels. As a result, clear, artifact-free images can be obtained. The effects of multiple reflections and rotation speed estimation error are also discussed. The proposed method has been verified using numerical simulations and shown to be effective.

  8. Deconvolution of two-dimensional NMR spectra by fast maximum likelihood reconstruction: application to quantitative metabolomics.

    PubMed

    Chylla, Roger A; Hu, Kaifeng; Ellinger, James J; Markley, John L

    2011-06-15

    We have developed an algorithm called fast maximum likelihood reconstruction (FMLR) that performs spectral deconvolution of 1D-2D NMR spectra for the purpose of accurate signal quantification. FMLR constructs the simplest time-domain model (e.g., the model with the fewest number of signals and parameters) whose frequency spectrum matches the visible regions of the spectrum obtained from identical Fourier processing of the acquired data. We describe the application of FMLR to quantitative metabolomics and demonstrate the accuracy of the method by analysis of complex, synthetic mixtures of metabolites and liver extracts. The algorithm demonstrates greater accuracy (0.5-5.0% error) than peak height analysis and peak integral analysis with greatly reduced operator intervention. FMLR has been implemented in a Java-based framework that is available for download on multiple platforms and is interoperable with popular NMR display and processing software. Two-dimensional ¹H-¹³C spectra of mixtures can be acquired with acquisition times of 15 min and analyzed by FMLR in the range of 2-5 min per spectrum to identify and quantify constituents present at concentrations of 0.2 mM or greater.

  9. Joint maximum likelihood estimation of activation and Hemodynamic Response Function for fMRI.

    PubMed

    Bazargani, Negar; Nosratinia, Aria

    2014-07-01

    Blood Oxygen Level Dependent (BOLD) functional magnetic resonance imaging (fMRI) maps brain activity by measuring the blood oxygenation level, which is related to brain activity via a temporal impulse response function known as the Hemodynamic Response Function (HRF). The HRF varies from subject to subject and across areas of the brain; therefore, knowledge of the HRF is necessary for accurately computing voxel activations. Conversely, knowledge of the active voxels is highly beneficial for estimating the HRF. This work presents a joint maximum likelihood estimation of the HRF and activation based on low-rank matrix approximations operating on regions of interest (ROI). Since each ROI has limited data, a smoothing constraint on the HRF is employed via Tikhonov regularization. The method is analyzed under both white noise and colored noise. Experiments with synthetic data show that accurate estimation of the HRF is possible with this method without prior assumptions on the exact shape of the HRF. Further experiments involving real fMRI experiments with auditory stimuli are used to validate the proposed method.

  10. Maximum likelihood estimation of vehicle position for outdoor image sensor-based visible light positioning system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Lin, Jiming

    2016-04-01

    Image sensor-based visible light positioning can be applied not only to indoor environments but also to outdoor environments. To determine the performance bounds of the positioning accuracy from the view of statistical optimization for an outdoor image sensor-based visible light positioning system, we analyze and derive the maximum likelihood estimation and corresponding Cramér-Rao lower bounds of vehicle position, under the condition that the observation values of the light-emitting diode (LED) imaging points are affected by white Gaussian noise. For typical parameters of an LED traffic light and in-vehicle camera image sensor, simulation results show that accurate estimates are available, with positioning error generally less than 0.1 m at a communication distance of 30 m between the LED array transmitter and the camera receiver. With the communication distance being constant, the positioning accuracy depends on the number of LEDs used, the focal length of the lens, the pixel size, and the frame rate of the camera receiver.

  12. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.

    2002-01-01

    The method of Maximum Likelihood (ML) is used to estimate the spectral parameters of an assumed broken power law energy spectrum from simulated detector responses. This methodology, which requires the complete specification of all cosmic-ray detector design parameters, is shown to provide approximately unbiased, minimum-variance, and normally distributed spectra information for events detected by an instrument having a wide range of commonly used detector response functions. The ML procedure, coupled with the simulated performance of a proposed space-based detector and its planned life cycle, has proved to be of significant value in the design phase of a new science instrument. The procedure helped make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope. This ML methodology is then generalized to estimate broken power law spectral parameters from real cosmic-ray data sets.
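
    A simplified sketch of ML estimation of broken-power-law indices directly from event energies, with the density normalized numerically. The paper's full treatment folds in the detector response functions, which are omitted here; the energy range, break, and sample are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize

    E_MIN, E_MAX = 1.0, 1e3    # observed energy range (illustrative units)

    def bpl_unnorm(E, g1, g2, Eb):
        """Broken power law, continuous at the break energy Eb."""
        return np.where(E < Eb, (E / Eb) ** (-g1), (E / Eb) ** (-g2))

    def nll(params, energies):
        g1, g2, Eb = params
        if not (E_MIN < Eb < E_MAX):
            return np.inf
        # Numerical normalization over the observed range; 'points'
        # warns quad about the kink at the break.
        Z, _ = quad(bpl_unnorm, E_MIN, E_MAX, args=(g1, g2, Eb), points=[Eb])
        return -np.sum(np.log(bpl_unnorm(energies, g1, g2, Eb) / Z))

    # Stand-in event sample (a real analysis would simulate the detector).
    energies = np.random.pareto(1.7, 5000) + 1.0
    energies = energies[energies < E_MAX]
    fit = minimize(nll, x0=[1.5, 2.5, 30.0], args=(energies,),
                   method="Nelder-Mead")
    ```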

  13. The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm

    NASA Astrophysics Data System (ADS)

    Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui

    2012-09-01

    Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving image resolution from high-altitude orbits. Several future projects are based on optical synthetic apertures for science or Earth observation. Compared with an equivalent monolithic telescope, however, the partly filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not work stably, and the point spread function (PSF) was assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error, and average contrast.
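
    The alternating structure of the IML algorithm, an image update with the PSF fixed followed by a PSF update with the image fixed, can be sketched with Richardson-Lucy-style multiplicative ML steps (blind deconvolution in the style of Fish et al.). This stand-in omits the paper's parametric PSF identification; all shapes and iteration counts are assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def blind_rl(image, psf, observed, n_outer=20, n_inner=5):
        """Alternating ML (Richardson-Lucy) updates of image and PSF.
        image, psf, observed: positive 2-D arrays of the same shape,
        e.g. image = observed.copy() and psf = a broad Gaussian guess."""
        conv = lambda a, b: fftconvolve(a, b, mode="same")
        for _ in range(n_outer):
            for _ in range(n_inner):            # image step, PSF fixed
                ratio = observed / np.clip(conv(image, psf), 1e-12, None)
                image *= conv(ratio, psf[::-1, ::-1])
            for _ in range(n_inner):            # PSF step, image fixed
                ratio = observed / np.clip(conv(image, psf), 1e-12, None)
                psf *= conv(ratio, image[::-1, ::-1])
            psf /= psf.sum()                    # keep the PSF normalized
        return image, psf
    ```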

  15. Computationally scalable adaptive image interpolation algorithm using maximum-likelihood denoising for real-time applications

    NASA Astrophysics Data System (ADS)

    Hung, Kwok-Wai; Siu, Wan-Chi

    2013-10-01

    A computationally scalable image interpolation algorithm is always desirable for software and hardware implementations on central processing units (CPU), digital signal processors, field-programmable gate arrays, and low-cost hardware. A low-complexity, computationally scalable, and data-adaptive image interpolation algorithm that has a simple and homogeneous structure to efficiently scale the computation is proposed. Specifically, image interpolation is formulated as a denoising problem by proposing a new image model that relates the observed low-resolution pixels and the missing high-resolution pixels. Applying maximum-likelihood estimation with the new image model results in an adaptive linear filter, where the filter coefficients depend on the local noise covariance matrix, which is estimated from local noise samples. Due to the low overhead of the proposed interpolator, the overall computation scales efficiently with the number of noise samples. Experimental results show that the proposed scalable algorithm outperforms state-of-the-art fast algorithms and achieves more than 36 frames per second for upscaling 540p (960×540) video to 1080p (1920×1080) video using a multithreaded C++ software implementation on a PC system with an Intel i7 950 3 GHz CPU.

  16. Performance, Accuracy, and Web Server for Evolutionary Placement of Short Sequence Reads under Maximum Likelihood

    PubMed Central

    Berger, Simon A.; Krompass, Denis; Stamatakis, Alexandros

    2011-01-01

    We present an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model. The accuracy of the algorithm is evaluated on several real-world data sets and compared with placement by pair-wise sequence comparison, using edit distances and BLAST. We introduce a slow and accurate as well as a fast and less accurate placement algorithm. For the slow algorithm, we develop additional heuristic techniques that yield almost the same run times as the fast version with only a small loss of accuracy. When those additional heuristics are employed, the run time of the more accurate algorithm is comparable with that of a simple BLAST search for data sets with a high number of short query sequences. Moreover, the accuracy of the EPA is significantly higher, in particular when the sample of taxa in the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally fast but more accurate alternative to BLAST for tree-based inference of the evolutionary origin and composition of short sequence reads. We are also actively developing a Web server that offers a freely available service for computing read placements on trees using the EPA. PMID:21436105

  17. Genetic Risk and Longitudinal Disease Activity in Systemic Lupus Erythematosus Using Targeted Maximum Likelihood Estimation

    PubMed Central

    Gianfrancesco, Milena A.; Balzer, Laura; Taylor, Kimberly E.; Trupin, Laura; Nititham, Joanne; Seldin, Michael F.; Singer, Amanda Wheeler; Criswell, Lindsey A.; Barcellos, Lisa F.

    2016-01-01

    Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated systemic lupus activity questionnaire (SLAQ) in a multi-ethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (> 4.0) for individuals with a high GRS compared to those with a low GRS across nine timepoints after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking, and education, as well as time-dependent confounding of missing visits. Individual SNP analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across timepoints 8 and 9 after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors. PMID:27467283

  18. Genetic risk and longitudinal disease activity in systemic lupus erythematosus using targeted maximum likelihood estimation.

    PubMed

    Gianfrancesco, M A; Balzer, L; Taylor, K E; Trupin, L; Nititham, J; Seldin, M F; Singer, A W; Criswell, L A; Barcellos, L F

    2016-09-01

    Systemic lupus erythematosus (SLE) is a chronic autoimmune disease associated with genetic and environmental risk factors. However, the extent to which genetic risk is causally associated with disease activity is unknown. We utilized longitudinal-targeted maximum likelihood estimation to estimate the causal association between a genetic risk score (GRS) comprising 41 established SLE variants and clinically important disease activity as measured by the validated Systemic Lupus Activity Questionnaire (SLAQ) in a multiethnic cohort of 942 individuals with SLE. We did not find evidence of a clinically important SLAQ score difference (>4.0) for individuals with a high GRS compared with those with a low GRS across nine time points after controlling for sex, ancestry, renal status, dialysis, disease duration, treatment, depression, smoking and education, as well as time-dependent confounding of missing visits. Individual single-nucleotide polymorphism (SNP) analyses revealed that 12 of the 41 variants were significantly associated with clinically relevant changes in SLAQ scores across time points eight and nine after controlling for multiple testing. Results based on sophisticated causal modeling of longitudinal data in a large patient cohort suggest that individual SLE risk variants may influence disease activity over time. Our findings also emphasize a role for other biological or environmental factors.

  19. Evaluation of penalty design in penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Yang, Li; Ferrero, Andrea; Hagge, Rosalie J.; Badawi, Ramsey D.; Qi, Jinyi

    2014-03-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In previous work, we studied penalized maximum-likelihood (PML) image reconstruction for the detection task, where we used a multiview channelized Hotelling observer (mvCHO) to assess lesion detectability in 3D images. It mimics the condition where a human observer examines three orthogonal views of a 3D image for lesion detection. We proposed a method to design a shift-variant quadratic penalty function to improve the detectability of lesions at unknown locations, and validated it using computer simulations. In this study we evaluated the benefit of the proposed penalty function for lesion detection using real data. High-count real patient data with no identifiable tumor inside the field of view were used as the background data. A Na-22 point source was scanned in air at variable locations, and the point source data were superimposed onto the patient data as artificial lesions after being attenuated by the patient body. Independent Poisson noise was added to the high-count sinograms to generate 200 pairs of lesion-present and lesion-absent data sets, each mimicking a 5-minute scan. Lesion detectability was assessed using a multiview CHO and a human observer two-alternative forced choice (2AFC) experiment. The results showed that the optimized penalty can improve lesion detection over the conventional quadratic penalty function.

  20. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T.; Chromy, B.

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data, removing any excuse for not using this more appropriate estimator. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. The algorithm developed here is a simple extension of the common Levenberg-Marquardt (L-M) algorithm: it is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian; however, this criterion is not easy to satisfy in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) avoids these biases.
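
    One common way to bolt the Poisson MLE onto an existing Levenberg-Marquardt least-squares engine is to hand it residuals whose squared sum equals the Poisson deviance, so the L-M machinery minimizes the MLE objective rather than chi-square. A minimal sketch along those lines; the exponential-decay model mimics a lifetime histogram and is not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def poisson_deviance_residuals(model, counts):
        """Residuals r_i with sum(r_i**2) equal to the Poisson deviance
        2*sum[m - n + n*ln(n/m)]; each term is nonnegative by convexity."""
        m = np.clip(model, 1e-12, None)
        dev = m - counts + np.where(counts > 0,
                                    counts * np.log(counts / m), 0.0)
        return np.sqrt(2.0 * dev)

    # Toy lifetime histogram: exponential decay plus flat background.
    t = np.arange(100) * 0.1
    counts = np.random.poisson(80 * np.exp(-t / 2.0) + 3)

    def residuals(p):
        amp, tau, bg = p
        return poisson_deviance_residuals(amp * np.exp(-t / tau) + bg, counts)

    fit = least_squares(residuals, x0=[50.0, 1.0, 1.0], method="lm")
    ```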

  1. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are given first, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  3. Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.

    PubMed

    Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng

    2017-05-01

    We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to increase dramatically the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of ~0.02 migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration.

  4. A maximum likelihood method for high resolution proton radiography/proton CT.

    PubMed

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2016-12-07

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thicknesses (WET) of projections defined from the source to the detector pixels were estimated so as to maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographs were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, and a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiographs. Spatial resolution is increased from 2.44 lp cm⁻¹ to 4.53 lp cm⁻¹ in the 200 MeV beam and from 3.49 lp cm⁻¹ to 5.76 lp cm⁻¹ in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as investigated between a radiograph acquired with the parallel beam (3.49 lp cm⁻¹ to 5.76 lp cm⁻¹) and the conical beam (from 3.49 lp cm⁻¹ to 5.56 lp cm⁻¹). The improved images were then used as input to a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm⁻¹ for the parallel beam and from 3.03 to 5.15 lp cm⁻¹ for the conical beam), and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and to greatly accelerate proton computed tomography reconstruction.

  5. A maximum likelihood method for high resolution proton radiography/proton CT

    NASA Astrophysics Data System (ADS)

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao

    2016-12-01

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves the spatial resolution of proton radiography. The water equivalent thickness (WET) of each projection, defined from the source to a detector pixel, was estimated such that it maximizes the likelihood of the energy loss of every proton crossing the volume. The length traversed in each projection was calculated from the optimized cubic spline path estimate. The proton radiographs were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, and a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiographs. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as shown by comparing radiographs acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) and conical (3.49 lp cm-1 to 5.56 lp cm-1) beams. The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates strong potential to increase the spatial resolution (up by 65%) in proton radiography and to greatly accelerate proton computed tomography reconstruction.

  6. Likelihood approach for evaluating bioequivalence of highly variable drugs

    PubMed Central

    Du, Liping; Choi, Leena

    2015-01-01

    Bioequivalence (BE) is required for approving a generic drug. The Two-One-Sided-Test (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration-time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross-over study with the typical number of subjects using the TOST due to lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference scaled average bioequivalence (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross-over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. PMID:25408492
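
    A minimal sketch of the profile-likelihood idea for the mean difference, using a plain normal model for hypothetical per-subject log-differences (the paper's model for a replicate cross-over design is richer):

        import numpy as np

        def profile_likelihood(d, grid):
            """Standardized profile likelihood for the mean of normal data,
            with the variance profiled out at each grid value."""
            n = len(d)
            s2 = np.array([np.mean((d - m) ** 2) for m in grid])
            loglik = -0.5 * n * np.log(s2)        # up to an additive constant
            return np.exp(loglik - loglik.max())  # equals 1 at the MLE

        # Hypothetical log-AUC differences (test minus reference).
        d = np.array([0.05, -0.12, 0.20, 0.08, -0.03, 0.15, 0.02, -0.07])
        grid = np.linspace(-0.3, 0.4, 201)
        pl = profile_likelihood(d, grid)
        inside = grid[pl >= 1.0 / 8.0]            # a 1/8 likelihood interval
        print(inside.min(), inside.max())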

  7. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    PubMed

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  8. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
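
    In practice, maximum-likelihood convolutional decoding is the Viterbi algorithm. A toy hard-decision sketch for a rate-1/2, constraint-length-3 code (generators 7 and 5 octal; the record does not give the flight decoder's code parameters):

        G = (0b111, 0b101)

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state                  # register [b, s1, s0]
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1                        # next state [b, s1]
            return out

        def viterbi(rx):
            INF = float("inf")
            pm, paths = [0, INF, INF, INF], [[], [], [], []]
            for i in range(0, len(rx), 2):
                new_pm, new_paths = [INF] * 4, [None] * 4
                for s in range(4):
                    if pm[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | s
                        out = [bin(reg & g).count("1") & 1 for g in G]
                        m = pm[s] + sum(o != r for o, r in zip(out, rx[i:i + 2]))
                        ns = reg >> 1
                        if m < new_pm[ns]:
                            new_pm[ns], new_paths[ns] = m, paths[s] + [b]
                pm, paths = new_pm, new_paths
            return paths[pm.index(min(pm))]

        msg = [1, 0, 1, 1, 0, 0, 1]
        rx = encode(msg)
        rx[3] ^= 1                                      # one channel error
        print(viterbi(rx) == msg)                       # True: error corrected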

  9. STELLS2: fast and accurate coalescent-based maximum likelihood inference of species trees from gene tree topologies.

    PubMed

    Pei, Jingwen; Wu, Yufeng

    2017-06-15

    It is well known that gene trees and species trees may have different topologies. One explanation is incomplete lineage sorting, which is commonly modeled by the coalescent process. Under the multispecies coalescent model, a gene tree topology is observed with some probability (called the gene tree probability) for a given species tree. The gene tree probability is the main tool of the program STELLS, which finds the maximum likelihood estimate of the species tree from the given gene tree topologies. However, STELLS becomes slow as data size increases. Recently, several fast species tree inference methods have been developed that can handle large data sets. However, these methods often do not fully utilize the information in the gene trees. In this paper, we present an algorithm (called STELLS2) for computing the gene tree probability more efficiently than the original STELLS. The key idea of STELLS2 is to take some 'shortcuts' during the computation and compute the gene tree probability approximately. We apply the STELLS2 algorithm within the species tree inference approach of the original STELLS, which leads to a new maximum likelihood species tree inference method (also called STELLS2). Through simulation we demonstrate that the gene tree probabilities computed by STELLS2 and STELLS are strongly correlated. We show that STELLS2 is almost as accurate in species tree inference as STELLS. STELLS2 is also usually more accurate than several existing methods when there is one allele per species, although it is slower than these methods. STELLS2 outperforms these methods significantly when there are multiple alleles per species. The program STELLS2 is available for download at: https://github.com/yufengwudcs/STELLS2. yufeng.wu@uconn.edu. Supplementary data are available at Bioinformatics online.

  10. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization.

    PubMed

    Stsepankou, D; Arns, A; Ng, S K; Zygmanski, P; Hesser, J

    2012-10-07

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and are generalized and applied for nonlinear signal reconstruction. For quality checks, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Errors of projection matrix parameters of up to 1° projection angle deviation are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system.

  11. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, σ1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ2 > σ1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
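
    For the simple (unbroken) power law without a detector response, the ML estimate of the index has a closed form; a sketch with simulated energies (E_min and all numbers hypothetical):

        import numpy as np

        def powerlaw_mle(e, e_min):
            """Closed-form MLE for a continuous power law p(E) ∝ E^(-γ),
            E >= e_min, with its asymptotic standard error."""
            e = np.asarray(e)
            e = e[e >= e_min]
            n = e.size
            gamma = 1.0 + n / np.sum(np.log(e / e_min))
            return gamma, (gamma - 1.0) / np.sqrt(n)

        # Draw energies from a γ = 2.7 power law by inverse transform.
        rng = np.random.default_rng(1)
        u = rng.uniform(size=100_000)
        e = (1.0 - u) ** (-1.0 / (2.7 - 1.0))
        print(powerlaw_mle(e, 1.0))   # ≈ (2.7, 0.005)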

  12. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
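
    The core computation is the Gaussian log-likelihood with a full data covariance matrix; a minimal sketch using a Cholesky factorization (toy covariance with hypothetical parameters):

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def gaussian_loglik(resid, C):
            """log N(resid; 0, C), evaluated without forming C^{-1}."""
            n = len(resid)
            cf = cho_factor(C, lower=True)
            logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
            quad = resid @ cho_solve(cf, resid)
            return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

        # White noise plus an exponentially decaying correlation, standing
        # in for a white-plus-power-law noise model.
        t = np.arange(200.0)
        C = 0.5 * np.eye(200) + np.exp(-np.abs(t[:, None] - t[None, :]) / 30.0)
        r = np.random.default_rng(2).multivariate_normal(np.zeros(200), C)
        print(gaussian_loglik(r, C))

    MLE then maximizes this quantity over the noise-model parameters (here, the two amplitudes and the correlation time); the paper's contribution is making the repeated evaluation of such likelihoods cheap.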

  13. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
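
    In MLBMA, posterior model weights are typically computed from an information criterion (e.g. KIC or BIC); a minimal sketch with hypothetical criterion values:

        import numpy as np

        def mlbma_weights(ic, prior=None):
            """Posterior model probabilities w_k ∝ p_k exp(-0.5 ΔIC_k)."""
            ic = np.asarray(ic, dtype=float)
            prior = np.ones_like(ic) if prior is None else np.asarray(prior, float)
            w = prior * np.exp(-0.5 * (ic - ic.min()))
            return w / w.sum()

        # Hypothetical criterion values for three alternative models.
        w = mlbma_weights([312.4, 315.9, 320.2])
        preds = np.array([1.2, 1.5, 0.9])     # each model's prediction
        print(w, w @ preds)                   # weights and BMA prediction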

  14. Predicting bulk permeability using outcrop fracture attributes: The benefits of a Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; De Siena, L.

    2015-12-01

    The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulty of accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step that can aid the estimation of permeability and fluid flow, both of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurate probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to establish whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.

  15. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    SciTech Connect

    Brendel, Bernhard; Teuffenbach, Maximilian von; Noël, Peter B.; Pfeiffer, Franz; Koehler, Thomas

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need for phase retrieval, and to examine its properties. Furthermore, this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing seen in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and the results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further, it was shown that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with fewer aliasing artifacts and fewer streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
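
    A minimal sketch of the penalized-ML idea, with a Poisson data term and an independent quadratic smoothing prior (A, y, and beta are hypothetical stand-ins, not the paper's DPC forward model):

        import numpy as np
        from scipy.optimize import minimize

        def penalized_nll(x, A, y, beta):
            """Negative log-likelihood plus a smoothing penalty on x."""
            lam = np.clip(A @ x, 1e-12, None)       # expected detector values
            nll = np.sum(lam - y * np.log(lam))     # Poisson NLL, up to a const.
            rough = np.sum(np.diff(x) ** 2)         # neighbor-difference prior
            return nll + beta * rough

        rng = np.random.default_rng(8)
        A = rng.uniform(0.0, 1.0, (80, 20))
        y = rng.poisson(A @ rng.uniform(1.0, 3.0, 20)).astype(float)
        res = minimize(penalized_nll, np.ones(20), args=(A, y, 0.1),
                       method="L-BFGS-B", bounds=[(0.0, None)] * 20)
        print(res.x.round(2))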

  16. Bias Reduction for Low-Statistics PET: Maximum Likelihood Reconstruction With a Modified Poisson Distribution

    PubMed Central

    Van Slambrouck, Katrien; Stute, Simon; Comtat, Claude; Sibomana, Merence; van Velden, Floris H. P.; Boellaard, Ronald

    2015-01-01

    Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method is a simplification of ABML. ABML has a lower and upper bound for the reconstructed image whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms precorrected data. However, both NEGML and AML yield bias-free images for large values of their parameter. PMID:25137726
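
    A minimal sketch of the hybrid objective underlying such modified-Poisson reconstructions: Poisson above a transition point, Gaussian below it (the exact transition and variance choice used by NEGML may differ):

        import numpy as np

        def hybrid_nll(y, lam, t):
            """Negative log-likelihood: Poisson where the expectation lam
            exceeds t, Gaussian (fixed variance t) below it, tolerating
            negative values in low-count regions."""
            lam = np.clip(lam, 1e-12, None)
            poisson = lam - y * np.log(lam)              # up to a constant in y
            gauss = 0.5 * (y - lam) ** 2 / max(t, 1e-12)
            return np.sum(np.where(lam >= t, poisson, gauss))

        print(hybrid_nll(np.array([0.0, 2.0, 30.0]),
                         np.array([0.5, 3.0, 28.0]), t=5.0))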

  17. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-02-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  18. Fusion of hyperspectral and lidar data based on dimension reduction and maximum likelihood

    NASA Astrophysics Data System (ADS)

    Abbasi, B.; Arefi, H.; Bigdeli, B.; Motagh, M.; Roessner, S.

    2015-04-01

    Limitations and deficiencies of individual remote sensing sensors in extracting different objects have made fusion of data from multiple sensors increasingly widespread as a means of improving classification results. Using a variety of data provided by different sensors increases both spatial and spectral accuracy. Lidar (Light Detection and Ranging) data fused together with hyperspectral images (HSI) provide rich data for classification of surface objects. Lidar data, representing high-quality geometric information, play a key role in segmentation and classification of elevated features such as buildings and trees. On the other hand, hyperspectral data, with their high spectral resolution, support discrimination between objects with different spectral signatures, such as soil, water, and grass. This paper presents a fusion methodology for Lidar and hyperspectral data to improve classification accuracy in urban areas. In the first step, we applied feature extraction strategies to each data set separately: texture features based on the GLCM (Grey Level Co-occurrence Matrix) were generated from the Lidar data, and PCA (Principal Component Analysis) and MNF (Minimum Noise Fraction) dimension-reduction methods were applied to the HSI. In the second step, a Maximum Likelihood (ML) classifier was applied to each feature space. Finally, a fusion method was applied to fuse the classification results. A co-registered hyperspectral and Lidar data set from the University of Houston was used to examine the proposed method. This data set contains nine classes: Building, Tree, Grass, Soil, Water, Road, Parking, Tennis Court and Running Track. Experiments show an improvement of classification accuracy to 88%.
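
    The ML step above is a Gaussian maximum-likelihood classifier; a minimal two-class sketch in a hypothetical 2-D feature space:

        import numpy as np
        from scipy.stats import multivariate_normal

        def ml_classify(X, means, covs):
            """Assign each feature vector to the class whose Gaussian
            model gives the highest log-density."""
            logp = np.stack([multivariate_normal(m, c).logpdf(X)
                             for m, c in zip(means, covs)], axis=-1)
            return np.argmax(logp, axis=-1)

        rng = np.random.default_rng(3)
        means = [np.zeros(2), np.array([3.0, 3.0])]       # e.g. 'grass', 'road'
        covs = [np.eye(2), 2.0 * np.eye(2)]
        X = np.vstack([rng.multivariate_normal(means[0], covs[0], 100),
                       rng.multivariate_normal(means[1], covs[1], 100)])
        print(np.bincount(ml_classify(X, means, covs)))

    In practice the class means and covariances are estimated from training pixels in each feature space before classification.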

  19. Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models

    PubMed Central

    Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark

    2015-01-01

    This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the

  20. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    SciTech Connect

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of

  1. Bias reduction for low-statistics PET: maximum likelihood reconstruction with a modified Poisson distribution.

    PubMed

    Van Slambrouck, Katrien; Stute, Simon; Comtat, Claude; Sibomana, Merence; van Velden, Floris H P; Boellaard, Ronald; Nuyts, Johan

    2015-01-01

    Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method is a simplification of ABML. ABML has a lower and upper bound for the reconstructed image whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms precorrected data. However, both NEGML and AML yield bias-free images for large values of their parameter.

  2. A maximum-likelihood multi-resolution weak lensing mass reconstruction method

    NASA Astrophysics Data System (ADS)

    Khiabanian, Hossein

    Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. It is therefore essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project was motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of the massive clusters at a resolution which is not attainable in the rest of the observed field.

  3. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  4. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
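
    A minimal sketch of the quadrature idea: the point-process log-likelihood is the sum of log-intensities at the spike times minus the integral of the intensity over the recording, with the integral handled by Gauss-Legendre quadrature (toy intensity and hypothetical spike times):

        import numpy as np

        def pp_loglik(spikes, lam, T, q=60):
            """Continuous-time point-process log-likelihood with a
            q-node Gauss-Legendre approximation of the integral of
            lam(t) over [0, T]."""
            nodes, weights = np.polynomial.legendre.leggauss(q)
            t = 0.5 * T * (nodes + 1.0)               # map [-1, 1] -> [0, T]
            integral = 0.5 * T * np.sum(weights * lam(t))
            return np.sum(np.log(lam(np.asarray(spikes)))) - integral

        lam = lambda t: 5.0 + 3.0 * np.sin(2.0 * np.pi * t / 10.0)
        print(pp_loglik([0.4, 1.1, 2.7, 5.3, 8.9], lam, T=10.0))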

  5. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  6. Maximum likelihood estimation of linkage disequilibrium in half-sib families.

    PubMed

    Gomez-Raya, L

    2012-05-01

    Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ=0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium.
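
    For reference, the disequilibrium parameter and r² follow directly from a haplotype frequency and the two allele frequencies; a one-line sketch matching the simulated setting above:

        def ld_measures(p_ab, p_a, p_b):
            """Delta (D) and r^2 from haplotype and allele frequencies."""
            d = p_ab - p_a * p_b
            r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
            return d, r2

        # Intermediate allele frequencies with delta = 0.10, as simulated:
        print(ld_measures(p_ab=0.35, p_a=0.5, p_b=0.5))   # (0.10, 0.16)

    The half-sib method's contribution lies in estimating the haplotype frequency itself by ML from the family-structured genotype data, since ignoring the family structure biases the LD estimates upward.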

  7. Maximum Likelihood Estimation of Linkage Disequilibrium in Half-Sib Families

    PubMed Central

    Gomez-Raya, L.

    2012-01-01

    Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ = 0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium. PMID:22377635

  8. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGES

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally

  9. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    NASA Astrophysics Data System (ADS)

    Stsepankou, D.; Arns, A.; Ng, S. K.; Zygmanski, P.; Hesser, J.

    2012-10-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and are generalized and applied for nonlinear signal reconstruction. For quality checks, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Errors of projection matrix parameters of up to 1° projection angle deviation are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system.

  10. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.

    PubMed

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A

    2009-06-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0) where φ_0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

  11. Maximum likelihood fitting of tidal streams with application to the Sagittarius dwarf tidal tails

    NASA Astrophysics Data System (ADS)

    Cole, Nathan

    2009-06-01

    A maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid is presented. Over small spatial extent, the tidal debris is modeled as a cylinder with density that falls off as a Gaussian with distance from its axis, while the smooth component of the stellar spheroid is modeled as a Hernquist profile. The method is designed to use 2.5° wide stripes of data that follow great circles across the sky, in which the tidal debris within each stripe is fit separately. A probabilistic separation technique which allows for the extraction of the optimized tidal streams from the input data set is presented. This technique allows for the creation of separate catalogs for each component fit in the stellar spheroid: one catalog for each piece of tidal debris that fits the density profile of the debris, and a single catalog which fits the density profile of the smooth stellar spheroid component. This separation technique is proven to be effective by extracting the simulated tidal debris from the simulated datasets. A method to determine the statistical errors is also developed, which utilizes a Hessian matrix to determine the width of the peak at the maximum of the likelihood surface. This error analysis method serves as a means of testing the algorithm with regard to the simulated datasets as well as determining the statistical errors of the optimizations over observational data. A heuristic method is also defined for determining the numerical error in the optimizations. The maximum likelihood algorithm is then used to optimize spatial data taken from the Sloan Digital Sky Survey. Stars having the color of blue F turnoff stars, 0.1 < (g − r)_0 < 0.3 and (u − g)_0 > 0.4, are extracted from the Sloan Digital Sky Survey database. In the algorithm, the absolute magnitude distribution of F turnoff stars is modeled as a Gaussian distribution, which is an improvement over previous methods which utilize a fixed absolute magnitude M_g,0
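
    The Hessian-based error analysis mentioned above amounts to inverting the curvature of the negative log-likelihood at its minimum; a minimal numerical sketch with a toy one-parameter model:

        import numpy as np

        def hessian_std_errors(negloglik, theta_hat, h=1e-4):
            """Asymptotic standard errors from a central-difference
            Hessian of the negative log-likelihood at the MLE."""
            k = len(theta_hat)
            H = np.zeros((k, k))
            for i in range(k):
                for j in range(k):
                    ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
                    H[i, j] = (negloglik(theta_hat + ei + ej)
                               - negloglik(theta_hat + ei - ej)
                               - negloglik(theta_hat - ei + ej)
                               + negloglik(theta_hat - ei - ej)) / (4 * h * h)
            return np.sqrt(np.diag(np.linalg.inv(H)))

        # Check: for N(mu, 1) data the standard error of mu is 1/sqrt(n).
        x = np.random.default_rng(4).normal(2.0, 1.0, 400)
        nll = lambda th: 0.5 * np.sum((x - th[0]) ** 2)
        print(hessian_std_errors(nll, np.array([x.mean()])))   # ~[0.05]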

  12. A Quasi-Likelihood Approach to Nonnegative Matrix Factorization.

    PubMed

    Devarajan, Karthik; Cheung, Vincent C K

    2016-08-01

    A unified approach to nonnegative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proved using the expectation-maximization algorithm. In addition, a measure to evaluate the goodness of fit of the resulting factorization is described. The proposed methods allow modeling of nonlinear effects using appropriate link functions and are illustrated using an application in biomedical signal processing.
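
    For the Poisson member of this family, the updates reduce to the familiar KL-divergence multiplicative rules; a minimal sketch:

        import numpy as np

        def nmf_kl(V, r, iters=200, seed=0):
            """Multiplicative updates for NMF under the KL divergence
            (the quasi-likelihood family's Poisson case)."""
            rng = np.random.default_rng(seed)
            n, m = V.shape
            W = rng.uniform(0.1, 1.0, (n, r))
            H = rng.uniform(0.1, 1.0, (r, m))
            for _ in range(iters):
                WH = W @ H + 1e-12
                H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]
                WH = W @ H + 1e-12
                W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]
            return W, H

        V = np.abs(np.random.default_rng(5).normal(size=(20, 30)))
        W, H = nmf_kl(V, r=4)
        print(float(np.linalg.norm(V - W @ H)))

    Other members of the family lead to analogous multiplicative updates with the same alternating structure.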

  13. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
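
    A minimal sketch of a pairwise composite log-likelihood under the FGM copula, with the dependence parameter driven by pairwise distance (uniform margins and no censoring, a simplification of the paper's survival setting; the tanh link is an assumption used here only to keep theta in (-1, 1)):

        import numpy as np
        from itertools import combinations

        def fgm_pairwise_logcl(u, dist, a0, a1):
            """Sum over pairs of log c(u_i, u_j; theta_ij), where
            c(u, v; theta) = 1 + theta (1 - 2u)(1 - 2v) and
            theta_ij = tanh(a0 + a1 * dist_ij)."""
            total = 0.0
            for i, j in combinations(range(len(u)), 2):
                theta = np.tanh(a0 + a1 * dist[i, j])
                total += np.log1p(theta * (1 - 2 * u[i]) * (1 - 2 * u[j]))
            return total

        rng = np.random.default_rng(9)
        u = rng.uniform(size=6)                       # pseudo-observations
        pts = rng.uniform(size=(6, 2))                # spatial locations
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        print(fgm_pairwise_logcl(u, dist, a0=0.5, a1=-0.3))

    Estimation then maximizes this sum over (a0, a1), e.g. with a generic numerical optimizer.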

  14. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

  15. Calculating the evolutionary rates of different genes: a fast, accurate estimator with applications to maximum likelihood phylogenetic analysis.

    PubMed

    Bevan, Rachel B; Lang, B Franz; Bryant, David

    2005-12-01

    In phylogenetic analyses with combined multigene or multiprotein data sets, accounting for differing evolutionary dynamics at different loci is essential for accurate tree prediction. Existing maximum likelihood (ML) and Bayesian approaches are computationally intensive. We present an alternative approach that is orders of magnitude faster. The method, Distance Rates (DistR), estimates rates based upon distances derived from gene/protein sequence data. Simulation studies indicate that this technique is accurate compared with other methods and robust to missing sequence data. The DistR method was applied to a fungal mitochondrial data set, and the rate estimates compared well to those obtained using existing ML and Bayesian approaches. Inclusion of the protein rates estimated from the DistR method into the ML calculation of trees as a branch length multiplier resulted in a significantly improved fit as measured by the Akaike Information Criterion (AIC). Furthermore, bootstrap support for the ML topology was significantly greater when protein rates were used, and some evident errors in the concatenated ML tree topology (i.e., without protein rates) were corrected. [Bayesian credible intervals; DistR method; multigene phylogeny; PHYML; rate heterogeneity.].

  16. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples which are necessary conditions for a maximum-likelihood estimate were considered. These equations suggest certain successive approximations iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.

  17. Efficient Full Information Maximum Likelihood Estimation for Multidimensional IRT Models. Research Report. ETS RR-09-03

    ERIC Educational Resources Information Center

    Rijmen, Frank

    2009-01-01

    Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…

  18. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  19. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    NASA Astrophysics Data System (ADS)

    Song, N.; He, B.; Wahl, R. L.; Frey, E. C.

    2011-09-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
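
    The maximum likelihood-expectation maximization update underlying this family of methods takes a simple generic form for a Poisson linear model. The sketch below, with a hypothetical system matrix `A` mapping organ activities to projection pixels, shows the bare iteration only; the physical degradation modeling and organ registration that QPlanar/EQPlanar add are omitted.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """Generic MLEM iteration for a Poisson linear model y ~ Poisson(A @ x),
    with x >= 0 (e.g. organ activities) and y the measured projection counts."""
    x = np.ones(A.shape[1])          # nonnegative initial estimate
    sens = A.sum(axis=0) + eps       # sensitivity image (column sums of A)
    for _ in range(n_iter):
        proj = A @ x + eps           # forward-project the current estimate
        x *= (A.T @ (y / proj)) / sens   # multiplicative ML update
    return x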

  20. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections.

    PubMed

    Song, N; He, B; Wahl, R L; Frey, E C

    2011-09-07

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  1. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power-law model consisting of a single spectral index α_1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index α_2 > α_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α_1 of a simple power-law energy spectrum and generalized to estimate the three spectral parameters of the broken power-law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramér-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramér-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power-law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectral information from
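
    For the idealized case of a simple power law observed above a threshold, with no detector response folded in, the ML estimator and its asymptotic (Cramér-Rao) error take closed forms. The sketch below shows this textbook baseline only, not the detector-convolved estimator developed in the paper.

```python
import numpy as np

def powerlaw_mle(E, E_min):
    """Closed-form ML estimate of the spectral index of dN/dE ~ E^-alpha
    for events with E >= E_min (no detector response folded in)."""
    E = np.asarray(E, dtype=float)
    E = E[E >= E_min]
    n = E.size
    alpha_hat = 1.0 + n / np.sum(np.log(E / E_min))
    sigma = (alpha_hat - 1.0) / np.sqrt(n)   # asymptotic Cramer-Rao error
    return alpha_hat, sigma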

  2. Statistical analysis of COMPTEL maximum likelihood-ratio distributions: evidence for a signal from previously undetected AGN

    SciTech Connect

    Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.

    1997-05-10

    The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source-free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum likelihood-ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from "random" locations, but differ slightly from those obtained from the locations of flat-spectrum radio-loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
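
    A minimal sketch of such a successive-approximations iteration for a two-component univariate normal mixture follows. The fixed-point map evaluates the likelihood-equation right-hand sides at the current parameters, and the `step` argument plays the role of the step-size discussed above, with step = 1 recovering the familiar EM-type update. The parameterization is illustrative, not Peters and Walker's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def em_map(theta, x):
    """One fixed-point update from the mixture likelihood equations.
    theta = (pi1, mu1, mu2, s1, s2); all quantities on the right-hand side
    are evaluated at the current parameter values."""
    p, m1, m2, s1, s2 = theta
    r1 = p * norm.pdf(x, m1, s1)
    r2 = (1 - p) * norm.pdf(x, m2, s2)
    w = r1 / (r1 + r2)                       # posterior membership weights
    return np.array([
        w.mean(),
        np.sum(w * x) / np.sum(w),
        np.sum((1 - w) * x) / np.sum(1 - w),
        np.sqrt(np.sum(w * (x - m1) ** 2) / np.sum(w)),
        np.sqrt(np.sum((1 - w) * (x - m2) ** 2) / np.sum(1 - w)),
    ])

def iterate(theta0, x, step=1.0, n_iter=500):
    """Deflected-gradient iteration: theta <- theta + step*(G(theta) - theta).
    Values of step in (0, 2) correspond to the convergent range discussed
    in the abstract; step = 1 is the plain successive-approximations scheme."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta + step * (em_map(theta, x) - theta)
    return theta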

  4. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  5. Maximum Likelihood Comparison of Tully-Fisher and Redshift Data. II. Results from an Expanded Sample

    NASA Astrophysics Data System (ADS)

    Willick, Jeffrey A.; Strauss, Michael A.

    1998-11-01

    This is the second in a series of papers in which we compare Tully-Fisher (TF) data from the Mark III Catalog with predicted peculiar velocities based on the IRAS galaxy redshift survey and gravitational instability theory, using a rigorous maximum likelihood method called VELMOD. In the first paper in this series, we applied the method to a cz_LG <= 3000 km s^-1, 838-galaxy TF sample and found β_I = 0.49 +/- 0.07, where β_I ≡ Ω^0.6/b_I and b_I is the linear biasing parameter for IRAS galaxies. In this paper we increase the redshift limit to cz_LG = 7500 km s^-1, thereby enlarging the sample to 1876 galaxies. The expanded sample now includes the Willick Pisces-Perseus (W91PP) and Courteau-Faber (CF) subsamples of the Mark III Catalog, in addition to the Aaronson et al. (A82) and Mathewson et al. (MAT) subsamples already considered in the first paper in this series. We implement VELMOD using both the forward and inverse forms of the TF relation and allow for a more general form of the quadrupole velocity residual than considered in the first paper in this series. We find β_I = 0.50 +/- 0.04 (1 σ error) at 300 km s^-1 smoothing of the IRAS-predicted velocity field. The fit residuals are spatially incoherent for β_I = 0.5, which indicates that the IRAS plus quadrupole velocity field is a good fit to the TF data. If we eliminate the quadrupole we obtain a worse fit but a similar value for β_I of 0.54 +/- 0.04. Changing the IRAS smoothing scale to 500 km s^-1 has almost no effect on the best β_I. Thus the data are consistent with a model in which the cosmological density parameter Ω ~ 0.3, and IRAS galaxies are unbiased, b_I = 1. We find evidence for a density dependence of the small-scale velocity dispersion, σ_v(δ_g) ≈ (100 + 35 δ_g) km s^-1. One of the byproducts of the VELMOD method is a self-consistent calibration of the TF relation. We confirm our result from the first paper in this series that the TF relations for the A82 and MAT samples found by VELMOD are consistent

  6. A Unified Maximum Likelihood Framework for Simultaneous Motion and T1 Estimation in Quantitative MR T1 Mapping.

    PubMed

    Ramos-Llorden, Gabriel; den Dekker, Arnold J; Van Steenkiste, Gwendolyn; Jeurissen, Ben; Vanhevel, Floris; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan

    2017-02-01

    In quantitative MR T1 mapping, the spin-lattice relaxation time T1 of tissues is estimated from a series of T1-weighted images. As the T1 estimation is a voxel-wise estimation procedure, correct spatial alignment of the T1-weighted images is crucial. Conventionally, the T1-weighted images are first registered based on a general-purpose registration metric, after which the T1 map is estimated. However, as demonstrated in this paper, such a two-step approach leads to a bias in the final T1 map. In our work, instead of considering motion correction as a preprocessing step, we recover the motion-free T1 map using a unified estimation approach. In particular, we propose a unified framework where the motion parameters and the T1 map are simultaneously estimated with a Maximum Likelihood (ML) estimator. With our framework, the relaxation model, the motion model as well as the data statistics are jointly incorporated to provide substantially more accurate motion and T1 parameter estimates. Experiments with realistic Monte Carlo simulations show that the proposed unified ML framework outperforms the conventional two-step approach as well as state-of-the-art model-based approaches, in terms of both motion and T1 map accuracy and mean-square error. Furthermore, the proposed method was additionally validated in a controlled experiment with real T1-weighted data and with two in vivo human brain T1-weighted data sets, showing its applicability in real-life scenarios.

  7. A Block Successive Lower-Bound Maximization Algorithm for the Maximum Pseudo-Likelihood Estimation of Fully Visible Boltzmann Machines.

    PubMed

    Nguyen, Hien D; Wood, Ian A

    2016-03-01

    Maximum pseudo-likelihood estimation (MPLE) is an attractive method for training fully visible Boltzmann machines (FVBMs) due to its computational scalability and the desirable statistical properties of the MPLE. No published algorithms for MPLE have been proven to be convergent or monotonic. In this note, we present an algorithm for the MPLE of FVBMs based on the block successive lower-bound maximization (BSLM) principle. We show that the BSLM algorithm monotonically increases the pseudo-likelihood values and that the sequence of BSLM estimates converges to the unique global maximizer of the pseudo-likelihood function. The relationship between the BSLM algorithm and the gradient ascent (GA) algorithm for MPLE of FVBMs is also discussed, and a convergence criterion for the GA algorithm is given.

  8. MOSAIC_SSD: a new web tool for species sensitivity distribution to include censored data by maximum likelihood.

    PubMed

    Kon Kam King, Guillaume; Veber, Philippe; Charles, Sandrine; Delignette-Muller, Marie Laure

    2014-09-01

    Censored data are seldom taken into account in species sensitivity distribution (SSD) analysis. However, they are found in virtually every dataset and sometimes represent the better part of the data. Stringent recommendations on data quality often entail discarding a lot of these meaningful data, resulting in datasets of reduced size which lack representativeness of any realistic community. However, it is reasonably simple to include censored data in SSD by using an extension of the standard maximum likelihood method. The authors detail this approach based on the use of the R-package fitdistrplus, dedicated to the fit of parametric probability distributions. The authors present the new Web tool MOSAIC_SSD, that can fit an SSD on datasets containing any type of data, censored or not. The MOSAIC_SSD Web tool predicts any hazardous concentration and provides bootstrap confidence intervals on the predictions. Finally, the authors illustrate the added value of including censored data in SSD, taking examples from published data. © 2014 SETAC.
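
    The extension of standard maximum likelihood to censored data amounts to replacing the density with the appropriate probability mass for each censored observation: the CDF for left-censored values and the survival function for right-censored ones. The sketch below does this for a hypothetical log-normal SSD in Python; the authors' tool itself builds on the R package fitdistrplus, so this is an illustration of the principle, not their code.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_lognormal_nll(params, exact, left, right):
    """Negative log-likelihood of a log-normal SSD with censored data.

    exact: fully observed toxicity values;
    left:  left-censored values (true value below the reported bound);
    right: right-censored values (true value above the reported bound)."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # log-normal log-pdf of x is Normal logpdf of log(x) minus log(x)
    ll = norm.logpdf(np.log(exact), mu, sigma).sum() - np.log(exact).sum()
    ll += norm.logcdf(np.log(left), mu, sigma).sum()   # P(X < bound)
    ll += norm.logsf(np.log(right), mu, sigma).sum()   # P(X > bound)
    return -ll

# res = minimize(censored_lognormal_nll, x0=[0.0, 1.0],
#                args=(exact, left, right), method="Nelder-Mead")
# A hazardous concentration such as HC5 is then the fitted 5th percentile:
# np.exp(res.x[0] + res.x[1] * norm.ppf(0.05))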

  9. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated.

    PubMed

    Langlois, Dominic; Cousineau, Denis; Thivierge, J P

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.
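
    To make the role of truncation concrete, the sketch below computes the ML exponent of a continuous power law bounded to [a, b] by carrying the truncated normalization through the likelihood, which is exactly the correction that naive fits omit. It is a minimal illustration of the bounded continuous case only, not the authors' full discrete/censored estimator suite; the search bounds on alpha are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bounded_powerlaw_alpha(x, a, b):
    """ML exponent of a continuous power law truncated to [a, b], with
    density f(x) = (1 - alpha) * x**(-alpha) / (b**(1-alpha) - a**(1-alpha));
    ignoring the bounds biases the estimated slope."""
    x = np.asarray(x, dtype=float)
    n = x.size
    logx_sum = np.sum(np.log(x))

    def nll(alpha):
        if abs(alpha - 1.0) < 1e-9:            # limiting case f ~ 1/(x ln(b/a))
            logC = -np.log(np.log(b / a))
        else:
            logC = np.log((1.0 - alpha) / (b**(1.0 - alpha) - a**(1.0 - alpha)))
        return -(n * logC - alpha * logx_sum)

    return minimize_scalar(nll, bounds=(0.5, 5.0), method="bounded").x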

  10. Maximum likelihood estimators for truncated and censored power-law distributions show how neuronal avalanches may be misevaluated

    NASA Astrophysics Data System (ADS)

    Langlois, Dominic; Cousineau, Denis; Thivierge, J. P.

    2014-01-01

    The coordination of activity amongst populations of neurons in the brain is critical to cognition and behavior. One form of coordinated activity that has been widely studied in recent years is the so-called neuronal avalanche, whereby ongoing bursts of activity follow a power-law distribution. Avalanches that follow a power law are not unique to neuroscience, but arise in a broad range of natural systems, including earthquakes, magnetic fields, biological extinctions, fluid dynamics, and superconductors. Here, we show that common techniques that estimate this distribution fail to take into account important characteristics of the data and may lead to a sizable misestimation of the slope of power laws. We develop an alternative series of maximum likelihood estimators for discrete, continuous, bounded, and censored data. Using numerical simulations, we show that these estimators lead to accurate evaluations of power-law distributions, improving on common approaches. Next, we apply these estimators to recordings of in vitro rat neocortical activity. We show that different estimators lead to marked discrepancies in the evaluation of power-law distributions. These results call into question a broad range of findings that may misestimate the slope of power laws by failing to take into account key aspects of the observed data.

  11. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    PubMed

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm^2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental, and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.

  12. A maximum-likelihood method to correct for allelic dropout in microsatellite data with no replicate genotypes.

    PubMed

    Wang, Chaolong; Schroeder, Kari B; Rosenberg, Noah A

    2012-10-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy-Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets

  13. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    PubMed Central

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets

  14. Noise Estimation and Reduction in Magnetic Resonance Imaging Using a New Multispectral Nonlocal Maximum-likelihood Filter.

    PubMed

    Bouhrara, Mustapha; Bonny, Jean-Marie; Ashinsky, Beth G; Maring, Michael C; Spencer, Richard G

    2017-01-01

    Denoising of magnetic resonance (MR) images enhances diagnostic accuracy, the quality of image manipulations such as registration and segmentation, and parameter estimation. The first objective of this paper is to introduce a new, high-performance, nonlocal filter for noise reduction in MR image sets consisting of progressively-weighted, that is, multispectral, images. This filter is a multispectral extension of the nonlocal maximum likelihood filter (NLML). Performance was evaluated on synthetic and in-vivo T2 - and T1 -weighted brain imaging data, and compared to the nonlocal-means (NLM) and its multispectral version, that is, MS-NLM, and the nonlocal maximum likelihood (NLML) filters. Visual inspection of filtered images and quantitative analyses showed that all filters provided substantial reduction of noise. Further, as expected, the use of multispectral information improves filtering quality. In addition, numerical and experimental analyses indicated that the new multispectral NLML filter, MS-NLML, demonstrated markedly less blurring and loss of image detail than seen with the other filters evaluated. In addition, since noise standard deviation (SD) is an important parameter for all of these nonlocal filters, a multispectral extension of the method of maximum likelihood estimation (MLE) of noise amplitude is presented and compared to both local and nonlocal MLE methods. Numerical and experimental analyses indicated the superior performance of this multispectral method for estimation of noise SD.

  15. Asymptotic efficiency of the pseudo-maximum likelihood estimator in multi-group factor models with pooled data.

    PubMed

    Jin, Shaobo; Yang-Wallentin, Fan; Christoffersson, Anders

    2015-05-15

    A multi-group factor model is suitable for data originating from different strata. However, it often requires a relatively large sample size to avoid numerical issues such as non-convergence and non-positive definite covariance matrices. An alternative is to pool data from different groups in which a single-group factor model is fitted to the pooled data using maximum likelihood. In this paper, properties of pseudo-maximum likelihood (PML) estimators for pooled data are studied. The pooled data are assumed to be normally distributed from a single group. The resulting asymptotic efficiency of the PML estimators of factor loadings is compared with that of the multi-group maximum likelihood estimators. The effect of pooling is investigated through a two-group factor model. The variances of factor loadings for the pooled data are underestimated under the normal theory when error variances in the smaller group are larger. Underestimation is due to dependence between the pooled factors and pooled error terms. Small-sample properties of the PML estimators are also investigated using a Monte Carlo study.

  16. Maximum likelihood failure detection techniques applied to the shuttle orbiter reaction control subsystem

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.; Deyst, J. J.

    1975-01-01

    A technique for on-board detection and identification of hard failures and leaks of the shuttle orbiter reaction control subsystem jets, during the orbital flight phase, is presented. The method uses gimbal angle and linear accelerometer measurements from the orbiter inertial measurement unit and requires no additional hardware. Extended Kalman filters with residual traps are employed for state estimation, and generalized likelihood ratio tests for jet failure identification. Rigid body simulation results indicate identification times of less than 2 seconds for hard jet failures and less than 70 seconds for jet leaks.

  17. A Maximum Likelihood Method for Reconstruction of the Evolution of Eukaryotic Gene Structure

    PubMed Central

    Carmel, Liran; Rogozin, Igor B.; Wolf, Yuri I.; Koonin, Eugene V.

    2012-01-01

    Spliceosomal introns are one of the principal distinctive features of eukaryotes. Nevertheless, different large-scale studies disagree about even the most basic features of their evolution. In order to come up with a more reliable reconstruction of intron evolution, we developed a model that is far more comprehensive than previous ones. This model is rich in parameters, and estimating them accurately is infeasible by straightforward likelihood maximization. Thus, we have developed an expectation-maximization algorithm that allows for efficient maximization. Here, we outline the model and describe the expectation-maximization algorithm in detail. Since the method works with intron presence–absence maps, it is expected to be instrumental for the analysis of the evolution of other binary characters as well. PMID:19381540

  18. Nonparametric Maximum Penalized Likelihood Estimation of a Density from Arbitrarily Right-Censored Observations.

    DTIC Science & Technology

    1984-10-01

    Keywords: MPLE; survival estimation; random censorship; nonparametric density estimation; reliability. Based on arbitrarily right-censored observations, a nonparametric maximum penalized likelihood estimate of a density is considered: given the right-censored sample (x_i, d_i), i = 1, 2, ..., n, and a penalty functional Φ: H(n) → R, the Φ-penalized likelihood of v in H(n) is defined, and the first and second Fréchet derivatives of the penalized log-likelihood L(v) are given following Tapia (1971). [The remainder of the scanned abstract, including its displayed equations, is illegible.]

  19. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.

  20. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    NASA Astrophysics Data System (ADS)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Being a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.
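
    The classical maximum-likelihood restoration underlying such methods is the Richardson-Lucy iteration for Poisson-noise imaging with a known PSF. The sketch below shows that baseline iteration only; the AdvMLE algorithm described above additionally adapts the PSF between iterations, which is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution: the ML estimate of the object under
    Poisson noise and a fixed, known PSF."""
    est = np.full_like(image, image.mean(), dtype=float)  # flat initial guess
    psf_flip = psf[::-1, ::-1]       # flipped PSF implements the adjoint
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same") + eps
        ratio = image / blurred      # data / current model prediction
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est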

  1. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    PubMed

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Generalised Linear Models Incorporating Population Level Information: An Empirical Likelihood Based Approach

    PubMed Central

    Chaudhuri, Sanjay; Handcock, Mark S.; Rendall, Michael S.

    2011-01-01

    In many situations information from a sample of individuals can be supplemented by population level information on the relationship between a dependent variable and explanatory variables. Inclusion of the population level information can reduce bias and increase the efficiency of the parameter estimates. Population level information can be incorporated via constraints on functions of the model parameters. In general the constraints are nonlinear making the task of maximum likelihood estimation harder. In this paper we develop an alternative approach exploiting the notion of an empirical likelihood. It is shown that within the framework of generalised linear models, the population level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates using only unconstrained estimation. We also provide computable expressions for the standard errors. We give an application to demographic hazard modelling by combining panel survey data with birth registration data to estimate annual birth probabilities by parity. PMID:22740776

  3. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics.

    PubMed

    Gruber, Susan; van der Laan, Mark J

    2010-01-01

    A concrete example of the collaborative double-robust targeted likelihood estimator (C-TMLE) introduced in a companion article in this issue is presented, and applied to the estimation of causal effects and variable importance parameters in genomic data. The focus is on non-parametric estimation in a point treatment data structure. Simulations illustrate the performance of C-TMLE relative to current competitors such as the augmented inverse probability of treatment weighted estimator that relies on an external non-collaborative estimator of the treatment mechanism, and inefficient estimation procedures including propensity score matching and standard inverse probability of treatment weighting. C-TMLE is also applied to the estimation of the covariate-adjusted marginal effect of individual HIV mutations on resistance to the anti-retroviral drug lopinavir. The influence curve of the C-TMLE is used to establish asymptotically valid statistical inference. The list of mutations found to have a statistically significant association with resistance is in excellent agreement with mutation scores provided by the Stanford HIVdb mutation scores database.

  4. Estimation of effective population size of HIV-1 within a host: a pseudomaximum-likelihood approach.

    PubMed Central

    Seo, Tae-Kun; Thorne, Jeffrey L; Hasegawa, Masami; Kishino, Hirohisa

    2002-01-01

    Using pseudomaximum-likelihood approaches to phylogenetic inference and coalescent theory, we develop a computationally tractable method of estimating effective population size from serially sampled viral data. We show that the variance of the maximum-likelihood estimator of effective population size depends on the serial sampling design only because internal node times on a coalescent genealogy can be better estimated with some designs than with others. Given the internal node times and the number of sequences sampled, the variance of the maximum-likelihood estimator is independent of the serial sampling design. We then estimate the effective size of the HIV-1 population within nine hosts. If we assume that the mutation rate is 2.5 x 10^-5 substitutions/generation and is the same in all patients, estimated generation lengths vary from 0.73 to 2.43 days/generation and the mean (1.47) is similar to the generation lengths estimated by other researchers. If we assume that generation length is 1.47 days and is the same in all patients, mutation rate estimates vary from 1.52 x 10^-5 to 5.02 x 10^-5. Our results indicate that effective viral population size and evolutionary rate per year are negatively correlated among HIV-1 patients. PMID:11973287

  5. Can quartet analyses combining maximum likelihood estimation and Hennigian logic overcome long branch attraction in phylogenomic sequence data?

    PubMed

    Kück, Patrick; Wilkinson, Mark; Groß, Christian; Foster, Peter G; Wägele, Johann W

    2017-01-01

    Systematic biases such as long branch attraction can mislead commonly relied upon model-based (i.e. maximum likelihood and Bayesian) phylogenetic methods when, as is usually the case with empirical data, there is model misspecification. We present PhyQuart, a new method for evaluating the three possible binary trees for any quartet of taxa. PhyQuart was developed through a process of reciprocal illumination between a priori considerations and the results of extensive simulations. It is based on identification of site-patterns that can be considered to support a particular quartet tree taking into account the Hennigian distinction between apomorphic and plesiomorphic similarity, and employing corrections to the raw observed frequencies of site-patterns that exploit expectations from maximum likelihood estimation. We demonstrate through extensive simulation experiments that, whereas maximum likelihood estimation performs well in many cases, it can be outperformed by PhyQuart in cases where it fails due to extreme branch length asymmetries producing long-branch attraction artefacts even when there is only very minor model misspecification.

  6. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  7. Maximum-likelihood estimation of familial correlations from multivariate quantitative data on pedigrees: a general method and examples.

    PubMed Central

    Rao, D C; Vogler, G P; McGue, M; Russell, J M

    1987-01-01

    A general method for maximum-likelihood estimation of familial correlations from pedigree data is presented. The method is applicable to any type of data structure, including pedigrees in which variable numbers of individuals are present within classes of relatives, data in which multiple phenotypic measures are obtained on each individual, and multiple group analyses in which some correlations are equated across groups. The method is applied to data on high-density lipoprotein cholesterol and total cholesterol levels obtained from participants in the Swedish Twin Family Study. Results indicate that there is strong familial resemblance for both traits but little cross-trait resemblance. PMID:3687943

  8. Maximum likelihood estimate of target angles for a conical scan tracking system in the presence of speckle.

    PubMed

    Lubnau, D G

    1977-01-01

    The equation for the maximum likelihood estimate of target angle is derived for a conical scan tracking system when the target produces speckle and Gaussian noise is present. Operation with a direct detection receiver is assumed with the average photon flux large enough so that the discrete nature of photoelectric events may be ignored. For large average SNRs, the estimate is shown to be unbiased and the variance of the estimate limited by both the average SNR and the number of degrees of freedom of the detected field.

  9. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  10. Maximum Likelihood Expectation-Maximization Algorithms Applied to Localization and Identification of Radioactive Sources with Recent Coded Mask Gamma Cameras

    SciTech Connect

    Lemaire, H.; Barat, E.; Carrel, F.; Dautremer, T.; Dubos, S.; Limousin, O.; Montagu, T.; Normand, S.; Schoepff, V.; Amgarou, K.; Menaa, N.; Angelique, J.-C.; Patoz, A.

    2015-07-01

    In this work, we tested maximum-likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction through a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. A short-term perspective is the test of MAPEM algorithms that integrate prior values, adapted to gamma imaging, for the data to be reconstructed. (authors)

  11. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  12. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    PubMed Central

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2009-01-01

    Background: The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results: PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion: Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees. PMID:19737395

  13. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  14. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    PubMed

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
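
    Feature 2 above, correcting for instrument dead time, reduces to renormalizing the PDF over detectable events. For a single-exponential dwell-time distribution this yields a closed-form ML rate, sketched below under the assumption that events shorter than the dead time t_min are simply never recorded; this illustrates the principle, not MEMLET's internal implementation.

```python
import numpy as np

def exp_rate_with_dead_time(t, t_min):
    """ML rate for exponential dwell times when events shorter than the
    instrument dead time t_min are undetectable.  The truncated PDF is
    f(t) = k * exp(-k * (t - t_min)) for t >= t_min, which gives the
    closed form k_hat = 1 / (mean(t) - t_min)."""
    t = np.asarray(t, dtype=float)
    t = t[t >= t_min]                # keep only detectable events
    return 1.0 / (t.mean() - t_min)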

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is a simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm is based on a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
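
    For contrast with the recursive approach, the sketch below shows what ML soft-decision decoding computes in the first place, via brute-force enumeration of all 2^k codewords of a (7,4) Hamming code; RMLD reaches the same most-likely codeword without ever enumerating the full codebook. This is a baseline illustration under assumed noise parameters, not the RMLD algorithm itself.

      import numpy as np
      from itertools import product

      # Generator matrix of a (7,4) Hamming code.
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      messages = list(product([0, 1], repeat=4))
      codewords = np.array([(np.array(m) @ G) % 2 for m in messages])
      signals = 1 - 2 * codewords            # BPSK mapping: bit 0 -> +1, bit 1 -> -1

      rng = np.random.default_rng(2)
      sent = signals[5]
      r = sent + 0.8 * rng.normal(size=7)    # received sequence r = (r_1, ..., r_7)

      # For AWGN, the ML codeword maximizes the correlation metric <r, x>.
      best = int(np.argmax(signals @ r))
      print("sent:   ", codewords[5])
      print("decoded:", codewords[best])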

  16. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    PubMed

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
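
    The likelihood computation at the heart of such estimation can be illustrated on a toy four-state model: transition probabilities over a branch of length t follow P(t) = expm(Q*t), and the log-likelihood of aligned state pairs sums log joint probabilities weighted by pair counts. The real estimation works on 20x20 amino acid matrices and optimizes the entries of Q jointly over many alignments; this sketch only profiles a branch length under a fixed, hypothetical Q.

      import numpy as np
      from scipy.linalg import expm

      # A reversible toy rate matrix Q (rows sum to zero, symmetric here).
      Q = np.array([[-1.0,  0.4,  0.3,  0.3],
                    [ 0.4, -1.1,  0.3,  0.4],
                    [ 0.3,  0.3, -0.9,  0.3],
                    [ 0.3,  0.4,  0.3, -1.0]])
      pi = np.full(4, 0.25)                    # stationary frequencies

      def pair_log_likelihood(Q, t, counts, pi):
          """Log-likelihood of aligned-pair counts at divergence time t:
          the joint probability of pair (a, b) is pi_a * P(t)[a, b]."""
          P = expm(Q * t)
          return np.sum(counts * np.log(pi[:, None] * P))

      counts = np.array([[90,  3,  4,  3],     # hypothetical pair counts
                         [ 2, 88,  5,  5],
                         [ 4,  4, 85,  7],
                         [ 3,  6,  6, 85]])

      # Profile the likelihood over the branch length to locate its maximum.
      ts = np.linspace(0.01, 1.0, 100)
      lls = [pair_log_likelihood(Q, t, counts, pi) for t in ts]
      print(f"ML branch length ~ {ts[int(np.argmax(lls))]:.2f}")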

  17. A flexible decision-aided maximum likelihood phase estimation in hybrid QPSK/OOK coherent optical WDM systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Yulong

    2016-04-01

    Although the decision-aided (DA) maximum likelihood (ML) phase estimation (PE) algorithm has been investigated intensively, the block length effect impacts system performance and increases hardware complexity. In this paper, a flexible DA-ML algorithm is proposed for hybrid QPSK/OOK coherent optical wavelength division multiplexed (WDM) systems. We present a general cross phase modulation (XPM) model based on the Volterra series transfer function (VSTF) method to describe XPM effects induced by OOK channels at the end of dispersion management (DM) fiber links. Based on this model, weighting factors obtained from the maximum likelihood method are introduced to eliminate the block length effect. We derive an analytical expression for the phase error variance to predict the performance of a coherent receiver using the flexible DA-ML algorithm. Bit error ratio (BER) performance is evaluated and compared through both theoretical derivation and Monte Carlo (MC) simulation. The results show that the flexible DA-ML algorithm significantly outperforms the conventional DA-ML algorithm when the block length is fixed. Compared with the conventional DA-ML with optimum block length, the flexible DA-ML achieves better system performance, indicating that it is more effective at mitigating phase noise.
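
    For reference, here is a minimal sketch of the conventional block DA-ML phase estimator that the paper improves upon: the carrier phase over a block is estimated as the argument of the sum of r_k*conj(d_k), where d_k are the detected symbols. The paper's flexible, weighted variant is not reproduced, and for simplicity the decisions are assumed correct.

      import numpy as np

      rng = np.random.default_rng(3)
      qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # QPSK alphabet

      n_sym, phase_true, sigma = 64, 0.3, 0.1
      d = qpsk[rng.integers(0, 4, n_sym)]        # symbols (here: known decisions)
      noise = sigma * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
      r = d * np.exp(1j * phase_true) + noise    # received block

      # Block DA-ML estimate: phase of the decision-wiped sum.
      phase_hat = np.angle(np.sum(r * np.conj(d)))
      print(f"true phase {phase_true:.3f} rad, estimate {phase_hat:.3f} rad")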

  18. Important factors in the maximum likelihood analysis of flight test maneuvers

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.; Montgomery, T. D.

    1979-01-01

    The information presented is based on 12 years of experience at the NASA Dryden Flight Research Center in estimating stability and control derivatives from more than 3500 maneuvers on 32 aircraft. The overall approach to the analysis of dynamic flight test data is outlined. General requirements for data and instrumentation are discussed, and several examples of the types of problems that may be encountered are presented.

  19. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, afftected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matern form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal to assign likely formation histories for the patches under consideration. Our results

  20. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation

    PubMed Central

    Mazza, Gina L.; Enders, Craig K.; Ruehlman, Linda S.

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002; Graham, 2009; Enders, 2010). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249
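
    The proration problem is easy to reproduce in a few lines. In the hypothetical simulation below, item means differ and the high-mean item is more prone to missingness; even though the mechanism is MCAR (missingness is independent of the data values), averaging the available items biases the prorated scale mean. The authors' FIML analysis itself would be run in SEM software and is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      items = rng.normal(loc=[1.0, 3.0, 5.0], scale=1.0, size=(n, 3))
      true_mean = items.mean(axis=1).mean()          # scale-score mean, no missingness

      # MCAR missingness (independent of all data values), but concentrated
      # on the high-mean third item.
      mask = rng.random((n, 3)) < np.array([0.05, 0.05, 0.40])
      observed = np.where(mask, np.nan, items)
      observed = observed[~np.isnan(observed).all(axis=1)]  # drop all-missing rows

      prorated = np.nanmean(observed, axis=1)        # average the available items
      print(f"true scale mean {true_mean:.3f}, prorated mean {prorated.mean():.3f}")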