Science.gov

Sample records for maximum likelihood approach

  1. A Maximum Likelihood Approach to Correlational Outlier Identification.

    ERIC Educational Resources Information Center

    Bacon, Donald R.

    1995-01-01

    A maximum likelihood approach to correlational outlier identification is introduced and compared to the Mahalanobis D squared and Comrey D statistics through Monte Carlo simulation. Identification performance depends on the nature of the correlational outliers and the measure used, but the maximum likelihood approach offers the most robust performance…
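
    As context for the comparison above, a minimal sketch (ours, not Bacon's procedure) of flagging correlational outliers with the Mahalanobis D squared baseline; the data, threshold, and variable names are illustrative.

```python
# Flag correlational outliers with Mahalanobis D^2 (illustrative sketch).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
X[0] = [2.5, -2.5]        # each coordinate is plausible alone; the pair is not

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)   # Mahalanobis D^2

# Under multivariate normality, D^2 is roughly chi^2 with p degrees of freedom.
flagged = np.where(d2 > chi2.ppf(0.999, df=X.shape[1]))[0]
print(flagged)            # should include index 0, the planted outlier
```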

  2. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  3. A maximum likelihood approach to the inverse problem of scatterometry.

    PubMed

    Henn, Mark-Alexander; Gross, Hermann; Scholze, Frank; Wurm, Matthias; Elster, Clemens; Bär, Markus

    2012-06-01

    Scatterometry is frequently used as a non-imaging indirect optical method to reconstruct the critical dimensions (CD) of periodic nanostructures. A particularly promising direction is EUV scatterometry with wavelengths in the range of 13-14 nm. The conventional approach to determine CDs is the minimization of a least squares function (LSQ). In this paper, we introduce an alternative method based on maximum likelihood estimation (MLE) that determines the statistical error model parameters directly from measurement data. Using simulation data, we show that the MLE method is able to correct the systematic errors present in LSQ results and improves the accuracy of scatterometry. In a second step, the MLE approach is applied to measurement data from both extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Using MLE removes the systematic disagreement of EUV with other methods such as scanning electron microscopy and gives consistent results for DUV. PMID:22714306
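
    To illustrate the difference the abstract describes, a minimal sketch of an MLE fit that estimates error-model parameters jointly with the model parameters, in contrast to a plain LSQ fit; the linear "instrument model" and noise parameterization below are our assumptions, not the paper's scatterometry model.

```python
# Joint ML fit of model parameters and error-model parameters (sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y_clean = 2.0 * x + 0.5                          # hypothetical instrument model
sigma_true = np.sqrt((0.05 * y_clean) ** 2 + 0.02 ** 2)
y = y_clean + rng.normal(0.0, sigma_true)

def neg_log_lik(p):
    slope, intercept, a, b = p
    mu = slope * x + intercept
    var = (a * mu) ** 2 + b ** 2 + 1e-12         # assumed error model
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

fit = minimize(neg_log_lik, x0=[1.0, 0.0, 0.1, 0.1], method='Nelder-Mead')
print(fit.x)   # slope, intercept, and the error-model parameters a, b
```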

  4. Maximum-likelihood approach to strain imaging using ultrasound

    PubMed Central

    Insana, M. F.; Cook, L. T.; Bilgen, M.; Chaturvedi, P.; Zhu, Y.

    2009-01-01

    A maximum-likelihood (ML) strategy for strain estimation is presented as a framework for designing and evaluating bioelasticity imaging systems. Concepts from continuum mechanics, signal analysis, and acoustic scattering are combined to develop a mathematical model of the ultrasonic waveforms used to form strain images. The model includes three-dimensional (3-D) object motion described by affine transformations, Rayleigh scattering from random media, and 3-D system response functions. The likelihood function for these waveforms is derived to express the Fisher information matrix and variance bounds for displacement and strain estimation. The ML estimator is a generalized cross correlator for pre- and post-compression echo waveforms that is realized by waveform warping and filtering prior to cross correlation and peak detection. Experiments involving soft tissuelike media show the ML estimator approaches the Cramér–Rao error bound for small scaling deformations: at 5 MHz and 1.2% compression, the predicted lower bound for displacement errors is 4.4 µm and the measured standard deviation is 5.7 µm. PMID:10738797
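
    A minimal sketch of the cross-correlation core of such an estimator: the displacement between pre- and post-compression echoes is taken from the correlation peak, with parabolic sub-sample refinement. The warping and filtering stages described in the abstract are omitted, and the signals are synthetic.

```python
# Displacement from the cross-correlation peak, with sub-sample refinement.
import numpy as np

rng = np.random.default_rng(2)
pre = rng.normal(size=512)                  # stand-in for an RF echo segment
post = np.roll(pre, 7) + 0.05 * rng.normal(size=512)   # true shift: 7 samples

xc = np.correlate(post, pre, mode='full')
k = int(np.argmax(xc))
lag = k - (len(pre) - 1)                    # integer-sample lag at the peak

# Parabolic interpolation around the peak gives a sub-sample estimate.
delta = 0.5 * (xc[k - 1] - xc[k + 1]) / (xc[k - 1] - 2 * xc[k] + xc[k + 1])
print(lag + delta)                          # displacement estimate in samples
```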

  5. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
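
    A minimal sketch contrasting the standard Landy-Szalay estimate with a per-bin Poisson maximum-likelihood alternative; this is a simplified reading of the idea (the paper's ML estimator uses all of D, R, DD, DR, RR), and the counts are invented.

```python
# Landy-Szalay vs. a per-bin Poisson ML estimate of xi (sketch).
import numpy as np
from scipy.optimize import minimize_scalar

# Pair counts in one separation bin, already normalized for catalog sizes.
DD, DR, RR = 5123.0, 4875.0, 4790.0

xi_ls = (DD - 2.0 * DR + RR) / RR           # Landy & Szalay estimator

def neg_log_lik(xi):
    mu = RR * (1.0 + xi)                    # expected DD for this xi
    return mu - DD * np.log(mu)             # Poisson NLL up to a constant

xi_ml = minimize_scalar(neg_log_lik, bounds=(-0.99, 10.0), method='bounded').x
print(xi_ls, xi_ml)
```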

  6. An independent sequential maximum likelihood approach to simultaneous track-to-track association and bias removal

    NASA Astrophysics Data System (ADS)

    Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang

    2015-12-01

    In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association situations and estimate a bias for each association. Then we calculate the likelihood of each association after bias compensation. Finally, we choose the association situation with the maximum likelihood as the association result, and the corresponding bias estimate is the registration result. Considering the high false-alarm and interference rates, we adopt the independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces the correct association results and simultaneously estimates the biases precisely for a small number of targets in a multi-sensor fusion system.
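
    A minimal sketch of the enumerate-estimate-score loop the abstract outlines: for each candidate association, estimate a bias, then score the bias-compensated residuals and keep the maximum-likelihood association. The 2-D setup and the brute-force permutation search are illustrative.

```python
# Enumerate associations, estimate a bias for each, keep the ML one (sketch).
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
tracks_a = rng.uniform(0, 100, size=(4, 2))              # sensor A tracks
tracks_b = tracks_a + np.array([3.0, -2.0]) \
           + 0.3 * rng.normal(size=(4, 2))               # biased sensor B
tracks_b = tracks_b[[2, 0, 3, 1]]                        # unknown ordering

best = None
for perm in permutations(range(4)):
    residual = tracks_b[list(perm)] - tracks_a
    bias_hat = residual.mean(axis=0)                     # ML bias estimate
    sse = np.sum((residual - bias_hat) ** 2)             # Gaussian log-lik ~ -sse
    if best is None or sse < best[0]:
        best = (sse, perm, bias_hat)

print(best[1], best[2])     # recovered association and bias estimate
```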

  7. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weights. A weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation-method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
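
    A minimal sketch of the central fitting step, a weighted least-squares fit of a quadratic radiometric response DN = c0 + c1*L + c2*L^2 with per-point weights from an assumed noise-plus-digitization model; all numbers are illustrative, not VIIRS values.

```python
# Weighted least-squares fit of a quadratic radiometric response (sketch).
import numpy as np

rng = np.random.default_rng(4)
L = np.linspace(10.0, 600.0, 30)                      # aperture radiance grid
dn_clean = 50.0 + 8.0 * L - 1.5e-3 * L ** 2           # hypothetical response
sigma = np.sqrt(0.01 * dn_clean + 1.0 / 12.0)         # count noise + digitization
dn = dn_clean + rng.normal(0.0, sigma)

A = np.vander(L, 3, increasing=True)                  # columns: 1, L, L^2
w = 1.0 / sigma                                       # square-root weights
coef, *_ = np.linalg.lstsq(A * w[:, None], dn * w, rcond=None)
print(coef)                                           # c0, c1, c2 estimates
```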

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity, and hence asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
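
    A minimal sketch of fitting a two-component Gaussian mixture by maximum likelihood with a few hand-rolled EM iterations; the data are synthetic stand-ins, not the rubber-price and exchange-rate series.

```python
# Two-component Gaussian mixture fitted by ML via a hand-rolled EM (sketch).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 200)])

w = np.array([0.5, 0.5])                    # initial mixing weights
mu = np.array([-2.0, 3.0])                  # initial means
sd = np.array([1.0, 1.0])                   # initial standard deviations

for _ in range(100):
    dens = w * norm.pdf(x[:, None], mu, sd)           # E-step:
    r = dens / dens.sum(axis=1, keepdims=True)        # responsibilities
    n_k = r.sum(axis=0)                               # M-step: weighted
    w = n_k / len(x)                                  # ML updates
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(w, mu, sd)
```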

  9. A maximum likelihood approach to estimating articulator positions from speech acoustics

    SciTech Connect

    Hogden, J.

    1996-09-23

    This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.

  10. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area lies in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of the seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to case studies for which extended hydrological information is available at annual and seasonal scales.
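
    A minimal sketch of the compatibility constraint at the heart of this kind of approach: if seasonal maxima are independent with CDFs F_s, the annual-maximum CDF is their product, so seasonal fits imply the annual distribution. Two seasons, Gumbel margins, and the optimizer-based MLE are our assumptions, not the paper's exact formulation.

```python
# Seasonal Gumbel MLE fits and the implied annual-maximum distribution (sketch).
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import minimize

rng = np.random.default_rng(6)
wet = gumbel_r.rvs(loc=100.0, scale=20.0, size=60, random_state=rng)
dry = gumbel_r.rvs(loc=40.0, scale=10.0, size=60, random_state=rng)

def fit_gumbel(sample):
    def nll(p):
        if p[1] <= 0:
            return np.inf
        return -gumbel_r.logpdf(sample, loc=p[0], scale=p[1]).sum()
    return minimize(nll, x0=[sample.mean(), sample.std()],
                    method='Nelder-Mead').x

p_wet, p_dry = fit_gumbel(wet), fit_gumbel(dry)

# With independent seasons, the annual non-exceedance probability at a
# level x is the product of the seasonal CDFs evaluated at x.
x = 150.0
print(gumbel_r.cdf(x, *p_wet) * gumbel_r.cdf(x, *p_dry))
```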

  11. Maximum likelihood approach for the adaptive optics point spread function reconstruction

    NASA Astrophysics Data System (ADS)

    Exposito, J.; Gratadour, Damien; Rousset, Gérard; Clénet, Yann; Mugnier, Laurent; Gendron, Éric

    2014-08-01

    This paper is dedicated to a new PSF reconstruction method based on a maximum likelihood (ML) approach that also uses the telemetry data of the AO system (see Exposito et al. 2013). This approach allows a joint estimation of the covariance matrix of the mirror modes of the residual phase, the noise variance and the Fried parameter r0. In this method, an estimate of the covariance between the parallel residual phase and the orthogonal phase is required. We developed a recursive approach taking into account the temporal effect of the AO loop, so that this covariance depends only on r0, the wind speed and some of the parameters of the system (the gain of the loop, the interaction matrix and the command matrix). With this estimation, the high-bandwidth hypothesis is no longer required to reconstruct the PSF with good accuracy. We present the validation of the method and results from numerical simulations (of a SCAO system), and show that our ML method allows an accurate estimation of the PSF in the case of a Shack-Hartmann (SH) wavefront sensor (WFS).

  12. THE MAXIMUM LIKELIHOOD APPROACH TO PROBABILISTIC MODELING OF AIR QUALITY DATA

    EPA Science Inventory

    Software using maximum likelihood estimation to fit six probabilistic models is discussed. The software is designed as a tool for the air pollution researcher to determine what assumptions are valid in the statistical analysis of air pollution data for the purpose of standard set...

  13. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible with Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in better performance of speckle tracking. However, those models do not consider common transformations applied to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. Accuracy and agreement were evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  14. Maximum-likelihood density modification

    PubMed Central

    Terwilliger, Thomas C.

    2000-01-01

    A likelihood-based approach to density modification is developed that can be applied to a wide variety of cases where some information about the electron density at various points in the unit cell is available. The key to the approach consists of developing likelihood functions that represent the probability that a particular value of electron density is consistent with prior expectations for the electron density at that point in the unit cell. These likelihood functions are then combined with likelihood functions based on experimental observations and with others containing any prior knowledge about structure factors to form a combined likelihood function for each structure factor. A simple and general approach to maximizing the combined likelihood function is developed. It is found that this likelihood-based approach yields greater phase improvement in model and real test cases than either conventional solvent flattening and histogram matching or a recent reciprocal-space solvent-flattening procedure [Terwilliger (1999), Acta Cryst. D55, 1863–1871]. PMID:10944333

  15. C-arm perfusion imaging with a fast penalized maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg

    2014-03-01

    Perfusion imaging is an essential method for stroke diagnostics, and one of the most important factors for a successful therapy is obtaining the diagnosis as fast as possible. Our approach therefore aims at perfusion imaging (PI) with a cone-beam C-arm system, providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of the small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors, as well as the sparse sampling of the slow-rotating C-arm, in combination with standard reconstruction methods results in limited soft tissue contrast. We choose a penalized maximum-likelihood reconstruction method to obtain suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered-subset technique that transitions to a small number of subsets, which sharpens the image in fewer iterations while also suppressing noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm; this optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which meets the demands of cone-beam geometry. In this preliminary study we evaluate the procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.

  16. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    PubMed

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end. PMID:27364333

  17. Blind deconvolution of quantum-limited incoherent imagery: maximum-likelihood approach.

    PubMed

    Holmes, T J

    1992-07-01

    Previous research presented by the author and others into maximum-likelihood image restoration for incoherent imagery is extended to consider problems of blind deconvolution in which the impulse response of the system is assumed to be unknown. Potential applications that motivate this study are wide-field and confocal fluorescence microscopy, although applications in astronomy and infrared imaging are foreseen as well. The methodology incorporates the iterative expectation-maximization algorithm. Although the precise impulse response is assumed to be unknown, some prior knowledge about characteristics of the impulse response is used. In preliminary simulation studies that are presented, the circular symmetry and the band-limited nature of the impulse response are used as such. These simulations demonstrate the potential utility and present limitations of these methods. PMID:1634965
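
    A minimal sketch of EM-style blind deconvolution in the Richardson-Lucy family, alternating multiplicative updates of the object and the PSF under a Poisson model; it is 1-D and omits the circular-symmetry and band-limit constraints the abstract uses, so it only illustrates the general scheme.

```python
# Blind Richardson-Lucy-style deconvolution, 1-D Poisson model (sketch).
import numpy as np

rng = np.random.default_rng(7)
obj = np.zeros(64)
obj[[20, 35, 36]] = [50.0, 80.0, 60.0]                # sparse "true" object
psf = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf /= psf.sum()
img = rng.poisson(np.convolve(obj, psf, mode='same')).astype(float)

f = np.full(64, img.mean())                           # object estimate
h = np.full(9, 1.0 / 9.0)                             # unknown PSF estimate

for _ in range(200):
    ratio = img / np.maximum(np.convolve(f, h, mode='same'), 1e-12)
    f *= np.convolve(ratio, h[::-1], mode='same')     # object update
    ratio = img / np.maximum(np.convolve(f, h, mode='same'), 1e-12)
    upd = np.array([ratio @ np.roll(f, k) for k in range(-4, 5)])
    h *= upd / f.sum()                                # PSF update
    h /= h.sum()                                      # keep PSF normalized

print(np.argsort(f)[-3:])   # brightest recovered locations (near 20, 35, 36)
```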

  18. Evaluation and optimization of the maximum-likelihood approach for image reconstruction in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Jerebko, Anna K.; Mertelmeier, Thomas

    2010-04-01

    Digital Breast Tomosynthesis (DBT) suffers from incomplete data and poor quantum statistics limited by the total dose absorbed in the breast. Hence, statistical reconstruction assuming the photon statistics to follow a Poisson distribution may have some advantages. This study investigates state-of-the-art iterative maximum likelihood (ML) statistical reconstruction algorithms for DBT and compares the results with simple backprojection (BP), filtered backprojection (FBP), and iFBP (FBP with a filter derived from iterative reconstruction). The gradient-ascent and convex-optimization variants of the transmission ML algorithm are evaluated with phantom and clinical data. Convergence speed is very similar for both iterative statistical algorithms, and after approximately 5 iterations all significant details are well displayed, although we notice increasing noise. We found empirically that a relaxation factor between 0.25 and 0.5 provides the optimal trade-off between noise and contrast. The ML-convex algorithm gives smoother results than the ML-gradient algorithm. The low-contrast CNR of the ML algorithms lies between that of simple backprojection (highest) and FBP (lowest). The spatial resolution of the iterative statistical and iFBP algorithms is similar to that of FBP, but the quantitative density representation better resembles conventional mammograms. The iFBP algorithm provides the benefits of statistical iterative reconstruction techniques and requires much shorter computation time.

  19. Maximum Likelihood Estimation in Generalized Rasch Models.

    ERIC Educational Resources Information Center

    de Leeuw, Jan; Verhelst, Norman

    1986-01-01

    Maximum likelihood procedures are presented for a general model to unify the various models and techniques that have been proposed for item analysis. Unconditional maximum likelihood estimation, proposed by Wright and Haberman, and conditional maximum likelihood estimation, proposed by Rasch and Andersen, are shown as important special cases. (JAZ)

  20. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDF's and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.

  1. Maximum likelihood topographic map formation.

    PubMed

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  2. Collaborative double robust targeted maximum likelihood estimation.

    PubMed

    van der Laan, Mark J; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from the current state of the art in nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum

  3. Collaborative Double Robust Targeted Maximum Likelihood Estimation*

    PubMed Central

    van der Laan, Mark J.; Gruber, Susan

    2010-01-01

    Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of the data is correctly specified, and it is semiparametric efficient if both are correctly specified. In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a departure from the current state of the art in nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable. We present theoretical results for “collaborative double robustness,” demonstrating that the collaborative targeted maximum

  4. TopREML: a topological restricted maximum likelihood approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-06-01

    We introduce topological restricted maximum likelihood (TopREML) as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. The ability of TopREML to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable via remote-sensing technology.
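
    A minimal sketch of the REML/BLUP machinery that TopREML builds on, using a generic linear mixed model with grouped random effects via statsmodels; the stream-network covariance structure is not reproduced here, and the catchment "regions" and covariate are invented for illustration.

```python
# Generic REML fit and BLUPs with a grouped linear mixed model (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n_groups, per_group = 10, 8
groups = np.repeat(np.arange(n_groups), per_group)
drainage_area = rng.uniform(1.0, 100.0, size=n_groups * per_group)
region_effect = rng.normal(0.0, 0.5, size=n_groups)[groups]
runoff = 0.3 + 0.02 * drainage_area + region_effect \
         + rng.normal(0.0, 0.2, size=n_groups * per_group)

X = sm.add_constant(drainage_area)
fit = sm.MixedLM(runoff, X, groups=groups).fit(reml=True)
print(fit.params)           # fixed effects estimated under REML
print(fit.random_effects)   # BLUPs of the group-level random effects
```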

  5. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  6. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-01-01

    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  7. Estimating a Logistic Discrimination Functions When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach

    PubMed Central

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations. PMID:26474313
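
    A minimal sketch of the idea as we read it from the abstract: if a true case is correctly labeled only with probability theta (otherwise it lands among the "controls"), the label probability becomes a defective logistic theta*sigmoid(a + b*x), and (a, b, theta) can be recovered by ordinary maximum likelihood; the data and parameterization are illustrative, not the paper's examples.

```python
# ML fit of a "defective" logistic under mislabeled controls (sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
true_case = rng.random(n) < expit(-1.0 + 2.0 * x)
theta_true = 0.7                                  # labeling sensitivity
label = true_case & (rng.random(n) < theta_true)  # missed cases -> "controls"

def nll(p):
    a, b, theta = p
    q = theta * expit(a + b * x)                  # defective logistic
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -np.where(label, np.log(q), np.log(1 - q)).sum()

fit = minimize(nll, x0=[0.0, 1.0, 0.9], method='Nelder-Mead')
print(fit.x)    # estimates of a, b and the mislabeling parameter theta
```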

  8. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-07-01

    The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows the information of the whole soil moisture profile to be used in the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering three soil types, namely loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but improves it significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  9. Improving soil moisture profile prediction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    NASA Astrophysics Data System (ADS)

    Tran, A. P.; Vanclooster, M.; Lambot, S.

    2013-02-01

    The vertical profile of root zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model links the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows the information of the whole soil moisture profile to be used in the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering three soil types, namely loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Increasing the update interval from 5 to 50 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand but significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.

  10. Trends in morphological evolution in homobasidiomycetes inferred using maximum likelihood: a comparison of binary and multistate approaches.

    PubMed

    Hibbett, David

    2004-12-01

    The homobasidiomycetes is a diverse group of macrofungi that includes mushrooms, puffballs, coral fungi, and other forms. This study used maximum likelihood methods to determine if there are general trends (evolutionary tendencies) in the evolution of fruiting body forms in homobasidiomycetes, and to estimate the ancestral forms of the homobasidiomycetes and euagarics clade. Character evolution was modeled using a published 481-species phylogeny under two character-coding regimes: additive binary coding, using DISCRETE, and multistate (five-state) coding, using MULTISTATE. Inferences regarding trends in character evolution made under binary coding were often in conflict with those made under multistate coding, suggesting that the additive binary coding approach cannot serve as a surrogate for multistate methods. MULTISTATE was used to develop a "minimal" model of fruiting body evolution, in which the 20 parameters that specify rates of transformations among character states were grouped into the fewest possible rate categories. The minimal model required only four rate categories, one of which is approaching zero, and suggests the following conclusions regarding trends in evolution of homobasidiomycete fruiting bodies: (1) there is an active trend favoring the evolution of pileate-stipitate forms (those with a cap and stalk); (2) the hypothesis that the evolution of gasteroid forms (those with internal spore production, such as puffballs) is irreversible cannot be rejected; and (3) crustlike resupinate forms are not a particularly labile morphology. The latter finding contradicts the conclusions of a previous study that used binary character coding. Ancestral state reconstructions under binary coding suggest that the ancestor of the homobasidiomycetes was resupinate and the ancestor of the euagarics clade was pileate-stipitate, but ancestral state reconstructions under multistate coding did not resolve the ancestral form of either node. The results of this study

  11. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  12. Model Fit after Pairwise Maximum Likelihood.

    PubMed

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  13. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using mutual information and distance-measure criteria. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  14. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach

    PubMed Central

    2013-01-01

    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields that have different energy functions. The software is available at https://sites.google.com/site/crankite/. PMID:24683370
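
    A minimal sketch of contrastive divergence itself (CD-1 for a small Bernoulli restricted Boltzmann machine), where the likelihood gradient is approximated by the difference between data-driven and one-step-reconstruction statistics; this illustrates the named technique in its textbook setting, not the paper's protein force field.

```python
# Contrastive divergence (CD-1) updates for a tiny Bernoulli RBM (sketch).
import numpy as np

rng = np.random.default_rng(11)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

data = rng.integers(0, 2, size=(100, 6)).astype(float)   # toy binary data
W = 0.01 * rng.normal(size=(6, 4))                       # visible x hidden
b_v, b_h, lr = np.zeros(6), np.zeros(4), 0.05

for _ in range(500):
    ph0 = sigmoid(data @ W + b_h)                 # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = sigmoid(h0 @ W.T + b_v)                  # one-step reconstruction
    ph1 = sigmoid(v1 @ W + b_h)                   # negative phase
    W += lr * (data.T @ ph0 - v1.T @ ph1) / len(data)
    b_v += lr * (data - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

print(np.round(W, 2))   # learned weights after CD-1 updates
```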

  15. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    NASA Astrophysics Data System (ADS)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom representation of the biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  16. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    SciTech Connect

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom representation of the biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  17. 230Th and 234Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina

    2016-05-01

    We applied maximum likelihood estimation to measurements of Th isotopes (234,230Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study contains two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, that we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y-1, fast-sinking particles: 13.96 ± 0.48 y-1) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y-1, fast-sinking particles: 1.81 ± 0.44 y-1). Aggregation rate constants were higher, 1.88 ± 0.04 y-1, in the upper layer and just 0.07 ± 0.01 y-1 in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y-1 in the upper

  18. Relevance Data for Language Models Using Maximum Likelihood.

    ERIC Educational Resources Information Center

    Bodoff, David; Wu, Bin; Wong, K. Y. Michael

    2003-01-01

    Presents a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval parameters. Discusses similarities to language models; the unification of document-oriented and query-oriented views; tests on data sets; algorithms and scalability; and the effectiveness of maximum likelihood…

  19. PsiMLE: A maximum-likelihood estimation approach to estimating psychophysical scaling and variability more reliably, efficiently, and flexibly.

    PubMed

    Odic, Darko; Im, Hee Yeon; Eisinger, Robert; Ly, Ryan; Halberda, Justin

    2016-06-01

    A simple and popular psychophysical model, usually described as overlapping Gaussian tuning curves arranged along an ordered internal scale, is capable of accurately describing both human and nonhuman behavioral performance and neural coding in magnitude estimation, production, and reproduction tasks for most psychological dimensions (e.g., time, space, number, or brightness). This model traditionally includes two parameters that determine how a physical stimulus is transformed into a psychological magnitude: (1) an exponent that describes the compression or expansion of the physical signal into the relevant psychological scale (β), and (2) an estimate of the amount of inherent variability (often called internal noise) in the Gaussian activations along the psychological scale (σ). To date, linear slopes on log-log plots have traditionally been used to estimate β, and a completely separate method of averaging coefficients of variance has been used to estimate σ. We provide a respectful, yet critical, review of these traditional methods, and offer a tutorial on a maximum-likelihood estimation (MLE) and a Bayesian estimation method for estimating both β and σ [PsiMLE(β,σ)], coupled with free software that researchers can use to implement it without a background in MLE or Bayesian statistics (R-PsiMLE). We demonstrate the validity, reliability, efficiency, and flexibility of this method through a series of simulations and behavioral experiments, and find the new method to be superior to the traditional methods in all respects. PMID:25987306
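
    A minimal sketch of the model class described above as we read it: responses lognormal around a power transform of the stimulus, log R ~ Normal(β log S, σ), with β and σ recovered jointly by maximum likelihood. This is our simplified reading, not the R-PsiMLE package itself; the numbers are illustrative.

```python
# Joint ML estimation of a psychophysical exponent and internal noise (sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
S = rng.uniform(5, 100, size=300)                # stimulus magnitudes
beta_true, sigma_true = 0.8, 0.25
R = np.exp(rng.normal(beta_true * np.log(S), sigma_true))  # responses

def nll(p):
    beta, sigma = p
    if sigma <= 0:
        return np.inf
    z = (np.log(R) - beta * np.log(S)) / sigma
    return np.sum(np.log(sigma) + 0.5 * z ** 2)  # Gaussian NLL in log space

fit = minimize(nll, x0=[1.0, 0.5], method='Nelder-Mead')
print(fit.x)   # jointly estimated (beta, sigma)
```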

  20. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2004-01-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  1. Sensor registration using airlanes: maximum likelihood solution

    NASA Astrophysics Data System (ADS)

    Ong, Hwa-Tung

    2003-12-01

    In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.

  2. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, which are important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  3. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  4. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  5. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  6. CORA: Emission Line Fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
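
    The core of such a Poisson maximum-likelihood line fit is compact enough to sketch; the Gaussian line profile, the flat background, and the optimizer below are illustrative assumptions, not CORA's actual implementation.

      import numpy as np
      from scipy.optimize import minimize

      def model(params, wav):
          # Emission line on a flat background: amplitude, center, width, background
          amp, cen, sig, bg = params
          return bg + amp * np.exp(-0.5 * ((wav - cen) / sig) ** 2)

      def neg_log_lik(params, wav, counts):
          m = model(params, wav)
          if np.any(m <= 0):
              return np.inf
          # Poisson log-likelihood, dropping the parameter-independent log(counts!) term
          return np.sum(m - counts * np.log(m))

      rng = np.random.default_rng(2)
      wav = np.linspace(13.3, 13.7, 200)
      counts = rng.poisson(model([30.0, 13.5, 0.02, 2.0], wav))

      fit = minimize(neg_log_lik, x0=[20.0, 13.45, 0.03, 1.0],
                     args=(wav, counts), method="Nelder-Mead")
      print(fit.x)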

  7. CORA - emission line fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.

  8. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  9. Market Penetration of Competing New Technology: A Maximum Likelihood (MLE) Approach to Modeling the Emergence of the Electronic Ballasts

    SciTech Connect

    Fathelrahman, Eihab M.; Anderson, Dave M.

    2003-08-20

    Technology is the major driving force of productivity gains and economic growth. Historical studies throughout the last decade attributed about half of economic growth to technological change and the other half to the combined effect of all other driving forces, such as the larger and better-qualified labor force and the accumulated stock of capital. V. Peterka (1977) was one of the first to break new ground exploring mathematical methods for forecasting market shares of competing technologies. Having information about the historical market shares of competing technologies, Peterka described a unique set of algebraic derivations in the MLEST model. Since Peterka, a large number of studies have described the theoretical basis for technology diffusion; however, very few have provided real-world examples or verified the applicability of market diffusion theory to energy applications. Objective: The objective of this study is to provide an example of projecting market shares of competing technologies using maximum likelihood estimation (MLE). The application applies to the emergence of the electronic ballast for fluorescent lighting applications. In this example we model the historical competition between the existing technology (magnetic ballasts) and the emerging technology (electronic ballasts). The factors surrounding electronic ballasts as replacements for magnetic ballasts provide a rich example of competing technologies. The lessons from this example could be used to inform forecasting of many other similar technologies penetrating the market in the U.S. energy sector (e.g., fuel cells, digital information and communication technologies (ICTs), etc.). The example will forecast the market shares of magnetic and electronic ballast technologies to 2020 and discuss the energy savings and other benefits. This type of modeling and analysis can help inform the rule-making process for any potential future DOE standards for electronic ballasts. Method: Kennedy, Peter (1993

  10. A maximum likelihood framework for protein design

    PubMed Central

    Kleinman, Claudia L; Rodrigue, Nicolas; Bonnard, Cécile; Philippe, Hervé; Lartillot, Nicolas

    2006-01-01

    Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces shaping protein sequences, and

  11. Targeted maximum likelihood estimation in safety analysis

    PubMed Central

    Lendle, Samuel D.; Fireman, Bruce; van der Laan, Mark J.

    2013-01-01

    Objectives To compare the performance of a targeted maximum likelihood estimator (TMLE) and a collaborative TMLE (CTMLE) to other estimators in a drug safety analysis, including a regression-based estimator, propensity score (PS)–based estimators, and an alternate doubly robust (DR) estimator in a real example and simulations. Study Design and Setting The real data set is a subset of observational data from Kaiser Permanente Northern California formatted for use in active drug safety surveillance. Both the real and simulated data sets include potential confounders, a treatment variable indicating use of one of two antidiabetic treatments and an outcome variable indicating occurrence of an acute myocardial infarction (AMI). Results In the real data example, there is no difference in AMI rates between treatments. In simulations, the double robustness property is demonstrated: DR estimators are consistent if either the initial outcome regression or PS estimator is consistent, whereas other estimators are inconsistent if the initial estimator is not consistent. In simulations with near-positivity violations, CTMLE performs well relative to other estimators by adaptively estimating the PS. Conclusion Each of the DR estimators was consistent, and TMLE and CTMLE had the smallest mean squared error in simulations. PMID:23849159

  12. Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.

    2014-08-01

    Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
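
    The contrast between the Poisson maximum-likelihood objective (twice the negative log-likelihood, often called the Cash statistic) and a naive χ² weighted by the observed counts can be reproduced in a few lines; the Gaussian-peak model and all settings below are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def peak(params, x):
          amp, mu, sigma, bg = params
          return bg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      def cash(params, x, n):
          # Poisson ML objective: 2 * sum(m - n*log m), constants dropped
          m = peak(params, x)
          if np.any(m <= 0):
              return np.inf
          return 2.0 * np.sum(m - n * np.log(m))

      def neyman_chi2(params, x, n):
          # chi^2 with variances taken from the data; biased at low counts
          m = peak(params, x)
          return np.sum((n - m) ** 2 / np.maximum(n, 1))

      rng = np.random.default_rng(3)
      x = np.linspace(-4, 4, 80)
      n = rng.poisson(peak([5.0, 0.0, 1.0, 1.0], x))   # sparse histogram

      for objective in (cash, neyman_chi2):
          fit = minimize(objective, x0=[4.0, 0.2, 1.2, 0.8], args=(x, n),
                         method="Nelder-Mead")
          print(objective.__name__, fit.x)   # the chi^2 estimates are typically biased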

  13. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.

  14. The maximum likelihood dating of magnetostratigraphic sections

    NASA Astrophysics Data System (ADS)

    Man, Otakar

    2011-04-01

    In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.

  15. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  16. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
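
    A minimal sketch of such an estimator, assuming a small state space, fully observed transition counts, and no detailed-balance constraint (which the paper's estimator can additionally enforce), parameterizes the off-diagonal rates on the log scale and pushes the likelihood through the matrix exponential.

      import numpy as np
      from scipy.linalg import expm
      from scipy.optimize import minimize

      TAU, N_STATES = 0.1, 3   # observation interval and state count (illustrative)

      def build_Q(theta):
          # Off-diagonal rates are exp(theta) so they stay positive;
          # each diagonal entry makes its row sum to zero, as a rate matrix requires.
          Q = np.zeros((N_STATES, N_STATES))
          off = ~np.eye(N_STATES, dtype=bool)
          Q[off] = np.exp(theta)
          np.fill_diagonal(Q, -Q.sum(axis=1))
          return Q

      def neg_log_lik(theta, counts):
          # counts[i, j] = number of observed i -> j transitions over one interval
          P = expm(build_Q(theta) * TAU)
          return -np.sum(counts * np.log(np.maximum(P, 1e-300)))

      rng = np.random.default_rng(4)
      P_true = expm(build_Q(np.log([1.0, 0.5, 0.3, 2.0, 0.8, 1.5])) * TAU)
      counts = np.vstack([rng.multinomial(5000, P_true[i]) for i in range(N_STATES)])

      fit = minimize(neg_log_lik, x0=np.zeros(6), args=(counts,), method="L-BFGS-B")
      print(build_Q(fit.x).round(2))   # should be close to the generating rate matrix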

  17. A maximum likelihood approach to generate hypotheses on the evolution and historical biogeography in the Lower Volga Valley regions (southwest Russia).

    PubMed

    Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico

    2012-07-01

    The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appear to be younger than that of adjacent steppes and desert-like regions, regardless of the chronology of transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent from one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179

  18. Low-complexity approximations to maximum likelihood MPSK modulation classification

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2004-01-01

    We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
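
    For the coherent case, the exact average-likelihood classifier (to which the paper develops low-complexity approximations) is short enough to sketch directly; the SNR convention, candidate orders, and sample count below are illustrative.

      import numpy as np

      def log_lik_mpsk(r, M, snr):
          # Average the Gaussian likelihood of each complex sample over the
          # M equiprobable PSK constellation points (coherent reception).
          phases = np.exp(2j * np.pi * np.arange(M) / M)
          a = -np.abs(r[:, None] - phases[None, :]) ** 2 * snr   # -|r - s|^2 / N0
          amax = a.max(axis=1)
          return np.sum(amax + np.log(np.exp(a - amax[:, None]).sum(axis=1))
                        - np.log(M))

      rng = np.random.default_rng(5)
      snr = 4.0                        # Es/N0, linear scale
      sym = np.exp(2j * np.pi * rng.integers(0, 4, 500) / 4)   # QPSK transmitted
      noise = rng.standard_normal(500) + 1j * rng.standard_normal(500)
      r = sym + noise * np.sqrt(0.5 / snr)

      scores = {M: log_lik_mpsk(r, M, snr) for M in (2, 4, 8)}
      print(max(scores, key=scores.get))   # should print 4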

  19. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
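
    The standard route to the maximum likelihood fit of a two-component normal mixture is the EM algorithm. The self-contained sketch below (with an illustrative initialization and iteration count, not the authors' code) spells out the E- and M-steps.

      import numpy as np

      def em_two_normal(x, n_iter=200):
          # EM for a two-component univariate Gaussian mixture
          w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
          for _ in range(n_iter):
              # E-step: responsibility of component 1 for each observation
              p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
              p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
              g = p1 / (p1 + p2)
              # M-step: weighted maximum likelihood updates
              w = g.mean()
              mu1 = np.sum(g * x) / g.sum()
              mu2 = np.sum((1 - g) * x) / (1 - g).sum()
              s1 = np.sqrt(np.sum(g * (x - mu1) ** 2) / g.sum())
              s2 = np.sqrt(np.sum((1 - g) * (x - mu2) ** 2) / (1 - g).sum())
          return w, mu1, mu2, s1, s2

      rng = np.random.default_rng(6)
      x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])
      print(em_two_normal(x))   # weight near 0.4, means near -1.0 and 1.5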

  20. Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.

    ERIC Educational Resources Information Center

    Harwell, Michael R.; And Others

    1988-01-01

    The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)

  1. Nonparametric identification and maximum likelihood estimation for hidden Markov models

    PubMed Central

    Alexandrovich, G.; Holzmann, H.; Leister, A.

    2016-01-01

    Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness of fit tests are investigated in a simulation study.

  2. Modified maximum likelihood registration based on information fusion

    NASA Astrophysics Data System (ADS)

    Qi, Yongqing; Jing, Zhongliang; Hu, Shiqiang

    2007-11-01

    The bias estimation of passive sensors is considered based on information fusion in multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.

  3. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  4. Maximum-likelihood estimation of recent shared ancestry (ERSA)

    PubMed Central

    Huff, Chad D.; Witherspoon, David J.; Simonson, Tatum S.; Xing, Jinchuan; Watkins, W. Scott; Zhang, Yuhua; Tuohy, Therese M.; Neklason, Deborah W.; Burt, Randall W.; Guthery, Stephen L.; Woodward, Scott R.; Jorde, Lynn B.

    2011-01-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package. PMID:21324875

  5. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  6. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  7. Fluorescence resonance energy transfer imaging by maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yupeng; Yuan, Yumin; Holmes, Timothy J.

    2004-06-01

    Fluorescence resonance energy transfer (FRET) is a fluorescence microscope imaging process involving nonradiative energy transfer between two fluorophores (the donor and the acceptor). FRET is used to detect the chemical interactions and, in some cases, measure the distance between molecules. Existing approaches do not always compensate well for bleed-through in excitation, cross-talk in emission detection, and electronic noise in image acquisition. We have developed a system to automatically search for maximum-likelihood estimates of the FRET image, donor concentration and acceptor concentration. It also produces other system parameters, such as excitation/emission filter efficiency and FRET conversion factor. The mathematical model is based upon a Poisson process since the CCD camera is a photon-counting device. The main advantage of the approach is that it automatically compensates for bleed-through and cross-talk degradations. Tests are presented with synthetic images and with real data referred to as positive and negative controls, where FRET is known to occur and to not occur, respectively. The test results verify the claimed advantages by showing consistent accuracy in detecting FRET and by showing improved accuracy in calculating FRET efficiency.

  8. Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem

    PubMed Central

    Chan, Kwun Chuen Gary; Qin, Jing

    2016-01-01

    We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657

  9. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
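
    The phenomenon is easy to reproduce: under a three-parameter logistic (3PL) model, some response patterns yield an ability likelihood with more than one mode. The two items and the response pattern below are contrived purely for illustration.

      import numpy as np

      def p3pl(theta, a, b, c):
          # Three-parameter logistic item response function
          return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

      # A highly discriminating hard item with guessing (answered correctly)
      # and an easy, weakly discriminating item (answered incorrectly)
      a = np.array([3.0, 0.5]); b = np.array([1.5, -0.5]); c = np.array([0.3, 0.0])
      resp = np.array([1, 0])

      theta = np.linspace(-6, 6, 1201)
      p = p3pl(theta[:, None], a, b, c)
      ll = np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p), axis=1)

      padded = np.r_[-np.inf, ll, -np.inf]     # so modes at the grid edges count
      modes = np.flatnonzero((padded[1:-1] > padded[:-2]) & (padded[1:-1] >= padded[2:]))
      print("modes at theta ~", theta[modes])  # one near the low end, one near 2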

  10. Targeted maximum likelihood based causal inference: Part I.

    PubMed

    van der Laan, Mark J

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is

  11. Targeted Maximum Likelihood Based Causal Inference: Part I

    PubMed Central

    van der Laan, Mark J.

    2010-01-01

    Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The obtained G-computation formula represents the counterfactual distribution the data would have had if this intervention would have been enforced on the system generating the data. A causal effect of interest can now be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment regimen specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of causal effects of multiple time point interventions. This involves the use of loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, and subsequently, applying a target-parameter specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) with maximum likelihood estimation, and iterating this updating step of the initial factor till convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect double robust in the sense that it is

  12. Magnitude: Yield relationship at various nuclear test sites - a maximum-likelihood approach using heavily censored explosive yields. Report for April 1989-April 1990

    SciTech Connect

    Jih, R.S.; Shumway, R.R.; Rivers, D.W.; Wagner, R.A.; McElfresh, T.W.

    1990-05-01

    Conventional methods for estimating underground explosion yields from seismic recordings are based on the use of some appropriate magnitude:yield relationship. One of the most important parameters used to characterize the seismic signature of an underground explosion is the body-wave magnitude, mb. Thus obtaining an unbiased measurement of mb (and of auxiliary Ms, P-coda, mb(Lg), Mo, and RMS Lg values) is obviously a key step in estimating the yield. During the past decade, the mb which is averaged over a well-distributed global network and which incorporates the maximum-likelihood technique into the inversion scheme has become widely accepted as a means to obtain mb estimates that avoid bias due to the detection threshold characteristics of individual network stations. Recently Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk Test Site in Eastern Kazakhstan. With the exception of releasing news about their peaceful nuclear explosions (PNE), the Soviets have never before published such a body of information. However, of the 72 Degelen events with announced yields, only 9 events (12.5%) were of known yields; the remainder were either left-censored (66.7%) or bounded (20.8%). A similar heavy-censoring pattern can be found for other test sites. Thus the development of a procedure capable of making full use of such censored information would seem very timely and necessary.
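
    The likelihood machinery for such heavily censored data is standard: exactly known observations contribute density terms and censored ones contribute distribution-function terms. The sketch below (a plain normal model with synthetic data, illustrative rather than the report's actual magnitude:yield formulation) recovers a mean and standard deviation from a mixture of exact and left-censored values.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def neg_log_lik(params, exact, upper_bounds):
          # Exact observations contribute the normal pdf; left-censored
          # observations ("value < bound") contribute the normal cdf.
          mu, sigma = params
          if sigma <= 0:
              return np.inf
          return -(norm.logpdf(exact, mu, sigma).sum()
                   + norm.logcdf(upper_bounds, mu, sigma).sum())

      rng = np.random.default_rng(7)
      data = rng.normal(1.2, 0.4, 60)          # e.g., log10 yields
      censored = rng.random(60) < 0.7          # ~70% reported only as "< bound"
      exact = data[~censored]
      upper_bounds = data[censored] + rng.uniform(0.0, 0.3, censored.sum())

      fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(exact, upper_bounds),
                     method="Nelder-Mead")
      print(fit.x)   # should land near (1.2, 0.4)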

  13. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per loci was low. However, even under these conditions, this bias was reduced to be statistically insignificant with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  14. Maximum likelihood estimation of shear wave speed in transient elastography.

    PubMed

    Audière, Stéphane; Angelini, Elsa D; Sandrin, Laurent; Charbit, Maurice

    2014-06-01

    Ultrasonic transient elastography (TE) enables assessment, under active mechanical constraints, of the elasticity of the liver, which correlates with hepatic fibrosis stages. This technique is routinely used in clinical practice to assess liver stiffness noninvasively. The Fibroscan system used in this work generates a shear wave via an impulse stress applied on the surface of the skin and records a temporal series of radio-frequency (RF) lines using a single-element ultrasound probe. A shear wave propagation map (SWPM) is generated as a 2-D map of the displacements along depth and time, derived from the correlations of the sequential 1-D RF lines, assuming that the direction of propagation (DOP) of the shear wave coincides with the ultrasound beam axis (UBA). Under the assumption of purely elastic tissue, elasticity is proportional to the shear wave speed. This paper introduces a novel approach to the processing of the SWPM, deriving the maximum likelihood estimate of the shear wave speed by comparing the observed displacements with the estimates provided by Green's functions. A simple parametric model is used to relate Green's theoretical values to the noisy measures provided by the SWPM, taking into account depth-varying attenuation and time delay. The proposed method was evaluated on numerical simulations using a finite element method simulator and on physical phantoms. Evaluation on this test database showed very high agreement of shear wave speed measures when the DOP and UBA coincide. PMID:24835213

  15. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
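
    Setting the convolution formulation aside, the maximum-likelihood objective it approximates reduces, under isotropic Gaussian noise on the circumferential points, to minimizing the squared radial residuals; the data and names below are illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      def radial_residuals(params, x, y):
          # MLE under isotropic Gaussian noise: distance from each point
          # to the candidate circle of center (cx, cy) and radius r
          cx, cy, r = params
          return np.hypot(x - cx, y - cy) - r

      rng = np.random.default_rng(8)
      t = rng.uniform(0, 2 * np.pi, 100)
      x = 3.0 + 5.0 * np.cos(t) + 0.1 * rng.standard_normal(100)
      y = -2.0 + 5.0 * np.sin(t) + 0.1 * rng.standard_normal(100)

      fit = least_squares(radial_residuals, x0=[0.0, 0.0, 1.0], args=(x, y))
      print(fit.x)   # should land near (3, -2, 5)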

  16. Correcting for Sequencing Error in Maximum Likelihood Phylogeny Inference

    PubMed Central

    Kuhner, Mary K.; McGill, James

    2014-01-01

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. PMID:25378476

  17. Skewness for Maximum Likelihood Estimators of the Negative Binomial Distribution

    SciTech Connect

    Bowman, Kimiko O.

    2007-01-01

    The probability generating function of one version of the negative binomial distribution being (p + 1 - pt)^(-k), we study elements of the Hessian and in particular Fisher's discovery of a series form for the variance of k, the maximum likelihood estimator, and also for the determinant of the Hessian. There is a link with the Psi function and its derivatives. Basic algebra is excessively complicated and a Maple code implementation is an important task in the solution process. Low order maximum likelihood moments are given and also Fisher's examples relating to data associated with ticks on sheep. Efficiency of moment estimators is mentioned, including the concept of joint efficiency. In an Addendum we give an interesting formula for the difference of two Psi functions.
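
    For context, the maximum likelihood estimates of k and the success probability are usually found numerically; a sketch follows, using the log-gamma function and an illustrative unconstrained parameterization.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      def neg_log_lik(params, x):
          # Negative binomial pmf: Gamma(x+k)/(Gamma(k) x!) * q^k * (1-q)^x,
          # with k > 0 and 0 < q < 1 enforced through the parameterization.
          k = np.exp(params[0])
          q = 1.0 / (1.0 + np.exp(-params[1]))
          ll = (gammaln(x + k) - gammaln(k) - gammaln(x + 1)
                + k * np.log(q) + x * np.log1p(-q))
          return -ll.sum()

      rng = np.random.default_rng(9)
      x = rng.negative_binomial(n=2.5, p=0.4, size=1000)

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(x,), method="L-BFGS-B")
      print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))   # near (2.5, 0.4)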

  18. A Targeted Maximum Likelihood Estimator for Two-Stage Designs

    PubMed Central

    Rose, Sherri; van der Laan, Mark J.

    2011-01-01

    We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the full-data structure collected in the second-stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator. PMID:21556285

  19. Precision of maximum likelihood estimation in adaptive designs.

    PubMed

    Graf, Alexandra Christine; Gutjahr, Georg; Brannath, Werner

    2016-03-15

    There has been increasing interest in trials that allow for design adaptations like sample size reassessment or treatment selection at an interim analysis. Ignoring the adaptive and multiplicity issues in such designs leads to an inflation of the type 1 error rate, and treatment effect estimates based on the maximum likelihood principle become biased. Whereas the methodological issues concerning hypothesis testing are well understood, it is not clear how to deal with parameter estimation in designs where adaptation rules are not fixed in advance, so that, in practice, the maximum likelihood estimate (MLE) is used. It is therefore important to understand the behavior of the MLE in such designs. The investigation of bias and mean squared error (MSE) is complicated by the fact that the adaptation rules need not be fully specified in advance and, hence, are usually unknown. To investigate bias and MSE under such circumstances, we search for the sample size reassessment and selection rules that lead to the maximum bias or maximum MSE. Generally, this leads to an overestimation of bias and MSE, which can be reduced by imposing realistic constraints on the rules, such as a maximum sample size. We consider designs that start with k treatment groups and a common control, and where selection of a single treatment and control is performed at the interim analysis with the possibility to reassess each of the sample sizes. We consider the case of unlimited sample size reassessments as well as several realistically restricted sample size reassessment rules. PMID:26459506

  20. Maximum-likelihood registration of range images with missing data.

    PubMed

    Sharp, Gregory C; Lee, Sang W; Wehe, David K

    2008-01-01

    Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329

  1. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    NASA Technical Reports Server (NTRS)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
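
    Computing exhaustive, normalized class-membership probabilities from Gaussian maximum likelihood outputs (the initial probabilities handed to relaxation) amounts to normalizing the per-class log-likelihoods; the two-band toy statistics below are illustrative.

      import numpy as np

      def gaussian_ml_posteriors(X, means, covs, priors):
          # Per-pixel Gaussian class log-likelihoods, exponentiated and
          # normalized into exhaustive class-membership probabilities
          n, d = X.shape
          logp = np.zeros((n, len(means)))
          for k, (m, C, w) in enumerate(zip(means, covs, priors)):
              diff = X - m
              sol = np.linalg.solve(C, diff.T).T
              logp[:, k] = (np.log(w)
                            - 0.5 * np.einsum("ij,ij->i", diff, sol)
                            - 0.5 * np.linalg.slogdet(C)[1]
                            - 0.5 * d * np.log(2 * np.pi))
          logp -= logp.max(axis=1, keepdims=True)   # numerical stability
          p = np.exp(logp)
          return p / p.sum(axis=1, keepdims=True)

      means = [np.array([0.2, 0.4]), np.array([0.6, 0.3])]   # two spectral classes
      covs = [0.01 * np.eye(2), 0.02 * np.eye(2)]
      priors = [0.5, 0.5]
      X = np.array([[0.25, 0.42], [0.55, 0.31]])             # two pixels, two bands
      print(gaussian_ml_posteriors(X, means, covs, priors))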

  2. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  3. Tradeoffs in regularized maximum-likelihood image restoration

    NASA Astrophysics Data System (ADS)

    Markham, Joanne; Conchello, Jose-Angel

    1997-04-01

    All algorithms for three-dimensional deconvolution of fluorescence microscopical images have as a common goal the estimation of a specimen function (SF) that is consistent with the recorded image and the process for image formation and recording. To check for consistency, the image of the estimated SF predicted by the imaging operator is compared to the recorded image, and the similarity between them is used as a figure of merit (FOM) in the algorithm to improve the specimen function estimate. Commonly used FOMs include squared differences, maximum entropy, and maximum likelihood (ML). The imaging operator is usually characterized by the point-spread function (PSF), the image of a point source of light, or its Fourier transform, the optical transfer function (OTF). Because the OTF is non-zero only over a small region of the spatial-frequency domain, the inversion of the image formation operator is non-unique and the estimated SF is potentially artifactual. Adding a term to the FOM that penalizes some unwanted behavior of the estimated SF effectively ameliorates potential artifacts, but at the same time biases the estimation process. For example, an intensity penalty avoids overly large pixel values but biases the SF to small pixel values. A roughness penalty avoids rapid pixel to pixel variations but biases the SF to be smooth. In this article we assess the effects of the roughness and intensity penalties on maximum likelihood image estimation.
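
    A one-dimensional sketch of Poisson maximum-likelihood (Richardson-Lucy) iterations illustrates the bias an intensity penalty introduces; the simple damping used for the penalty, and all settings, are illustrative assumptions rather than the authors' exact scheme.

      import numpy as np

      def richardson_lucy(image, psf, n_iter=50, lam=0.0):
          # Multiplicative Poisson ML updates; lam > 0 adds an intensity
          # penalty that biases the estimate toward smaller pixel values.
          est = np.full_like(image, image.mean())
          psf_flip = psf[::-1]
          for _ in range(n_iter):
              blurred = np.convolve(est, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              est = est * np.convolve(ratio, psf_flip, mode="same") / (1.0 + lam)
          return est

      rng = np.random.default_rng(10)
      truth = np.zeros(128)
      truth[40] = 200.0
      truth[60:90] = 20.0
      psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
      psf /= psf.sum()
      image = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)

      plain = richardson_lucy(image, psf, lam=0.0)
      damped = richardson_lucy(image, psf, lam=0.05)
      print(plain.max(), damped.max())   # the penalized peak is lower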

  4. Maximum likelihood analysis of bubble incidence for mixed gas diving.

    PubMed

    Tikuisis, P; Gault, K; Carrod, G

    1990-03-01

    The method of maximum likelihood has been applied to predict the incidence of bubbling in divers for both air and helium diving. Data were obtained from 108 air man-dives and 622 helium man-dives conducted experimentally in a hyperbaric chamber. Divers were monitored for bubbles using Doppler ultrasonics during the period from surfacing until approximately 2 h after surfacing. Bubble grades were recorded according to the K-M code, and the maximum value in the precordial region for each diver was used in the likelihood analysis. Prediction models were based on monoexponential gas kinetics using one and two parallel-compartment configurations. The model parameters were of three types: gas kinetics, gas potency, and compartment gain. When the potency of the gases was not distinguished, the risk criterion used was inherently based on the gas supersaturation ratio, otherwise it was based on the potential bubble volume. The two-compartment model gave a significantly better prediction than the one-compartment model only if the kinetics of nitrogen and helium were distinguished. A further significant improvement with the two-compartment model was obtained when the potency of the two gases was distinguished, thereby making the potential bubble volume criterion a better choice than the gas pressure criterion. The results suggest that when the method of maximum likelihood is applied for the prediction of the incidence of bubbling, more than one compartment should be used and if more than one is used consideration should be given to distinguishing the potencies of the inert gases. PMID:2181767

  5. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.
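
    The Newton-Raphson maximum-likelihood iteration can be sketched on a toy one-parameter problem: estimating the natural frequency of a damped mode from noisy response data. The model, noise level, starting guess, and step clipping are illustrative assumptions, not the flight-experiment implementation.

        # Toy Newton-Raphson maximum-likelihood iteration (illustrative assumptions:
        # single damped mode, known damping and noise level, good starting guess).
        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 5.0, 400)
        omega_true, zeta, sigma = 4.0, 0.05, 0.1
        y = (np.exp(-zeta * omega_true * t) * np.sin(omega_true * t)
             + sigma * rng.normal(size=t.size))

        def negloglik(omega):
            r = y - np.exp(-zeta * omega * t) * np.sin(omega * t)
            return 0.5 * np.sum(r ** 2) / sigma ** 2   # Gaussian NLL up to a constant

        omega, h = 3.8, 1e-4
        for _ in range(50):
            g = (negloglik(omega + h) - negloglik(omega - h)) / (2 * h)
            H = (negloglik(omega + h) - 2 * negloglik(omega)
                 + negloglik(omega - h)) / h ** 2
            step = float(np.clip(g / H, -0.2, 0.2))    # clipped for robustness
            omega -= step
            if abs(step) < 1e-9:
                break
        print(f"estimated omega = {omega:.4f} (true {omega_true})")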

  6. Targeted maximum likelihood based causal inference: Part II.

    PubMed

    van der Laan, Mark J

    2010-01-01

    In this article, we provide a template for the practical implementation of the targeted maximum likelihood estimator for analyzing causal effects of multiple time point interventions, for which the methodology was developed and presented in Part I. In addition, the application of this template is demonstrated in two important estimation problems: estimation of the effect of individualized treatment rules based on marginal structural models for treatment rules, and the effect of a baseline treatment on survival in a randomized clinical trial in which the time to event is subject to right censoring. PMID:21731531

  8. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  9. The Relative Performance of Targeted Maximum Likelihood Estimators

    PubMed Central

    Porter, Kristin E.; Gruber, Susan; van der Laan, Mark J.; Sekhon, Jasjeet S.

    2011-01-01

    There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes are especially suitable for dealing with positivity violations because, in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570

  10. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    NASA Astrophysics Data System (ADS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-10-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser) by performing continuous-time measurements on the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via the chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics, such as waiting times and the distribution of successive identical detections, and use them as input to the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
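
    The contrast between the two estimation methods can be reproduced on a toy counting model: an exact MLE versus an ABC rejection sampler driven by a summary statistic. The Poisson model, prior, and tolerance below are illustrative assumptions, not the atom-maser dynamics.

        # Toy comparison of maximum likelihood and ABC rejection for one rate
        # parameter; the Poisson model, prior, and tolerance are illustrative
        # assumptions, not the atom-maser measurement model.
        import numpy as np

        rng = np.random.default_rng(3)
        theta_true = 2.5
        data = rng.poisson(theta_true, size=200)

        theta_mle = data.mean()                     # Poisson MLE is the sample mean

        # ABC rejection: simulate from prior draws, keep those whose summary
        # statistic (the mean count) lands close to the observed one.
        s_obs = data.mean()
        prior = rng.uniform(0.0, 10.0, size=20_000)
        accepted = [th for th in prior
                    if abs(rng.poisson(th, size=200).mean() - s_obs) < 0.1]
        print(f"MLE {theta_mle:.3f}; ABC mean {np.mean(accepted):.3f} "
              f"from {len(accepted)} accepted draws")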

  11. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility of extracting, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.

  12. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

  13. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    SciTech Connect

    Agnese, R.

    2015-03-30

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from 210Pb decay-chain events, while independent calibration data are used to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs but rather reflects the inadequacy of their background model.

  14. Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes

    NASA Astrophysics Data System (ADS)

    Boyer, E.; Petitdidier, M.; Larzabal, P.

    2004-11-01

    This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse-pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills that gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionosphere Center (NAIC) in Arecibo, Puerto Rico, in 1998.

  15. Maximum likelihood estimation with poisson (counting) statistics for waste drum inspection

    SciTech Connect

    Goodman, D.

    1997-05-01

    This note provides a preliminary look at the issues involved in waste drum inspection when emission levels are so low that central limit theorem arguments do not apply and counting statistics, rather than the usual Gaussian assumption, must be considered. At very high count rates the assumption of Gaussian statistics is reasonable, and the maximum likelihood arguments that we discuss below for low count rates would lead to the usual approach of least squares fits. Least squares is not the best technique for low counts, however, and we develop the maximum likelihood estimators for the low-count case.
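
    The point about low counts can be illustrated with a short sketch comparing an unweighted least-squares fit and a Poisson maximum-likelihood fit of a single emission rate; the one-parameter model and all numbers are illustrative assumptions.

        # Low-count illustration: unweighted least squares versus Poisson maximum
        # likelihood for a single emission rate; the one-parameter model and all
        # numbers are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(4)
        t_live = np.full(10, 60.0)                  # counting times (s)
        eff = np.linspace(0.2, 1.0, 10)             # relative detection efficiencies
        rate_true = 0.05                            # counts/s at unit efficiency
        counts = rng.poisson(rate_true * eff * t_live)

        lsq = minimize_scalar(lambda r: np.sum((counts - r * eff * t_live) ** 2),
                              bounds=(1e-6, 1.0), method="bounded").x

        def nll(r):                                 # Poisson NLL up to a constant
            mu = r * eff * t_live
            return np.sum(mu - counts * np.log(mu))

        ml = minimize_scalar(nll, bounds=(1e-6, 1.0), method="bounded").x
        print(f"least squares {lsq:.4f}, Poisson ML {ml:.4f}, truth {rate_true}")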

  16. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    PubMed

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. PMID:24500821

  17. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  18. Finding Quantitative Trait Loci Genes with Collaborative Targeted Maximum Likelihood Learning.

    PubMed

    Wang, Hui; Rose, Sherri; van der Laan, Mark J

    2011-07-01

    Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in Listeria data. Results are compared to the parametric composite interval mapping approach. PMID:21572586

  20. Wavelet domain watermarking using maximum-likelihood detection

    NASA Astrophysics Data System (ADS)

    Ng, Tek M.; Garg, Hari K.

    2004-06-01

    A digital watermark is an imperceptible mark placed on multimedia content for a variety of applications, including copyright protection, fingerprinting, broadcast monitoring, etc. Traditionally, watermark detection algorithms are based on the correlation between the watermark and the media in which the watermark is embedded. Although simple to use, correlation detection is only optimal when the watermark embedding process follows an additive rule and when the media is drawn from Gaussian distributions. More recent works on watermark detection are based on decision theory. In this paper, a maximum-likelihood (ML) detection scheme based on Bayes's decision theory is proposed for image watermarking in the wavelet transform domain. The decision threshold is derived using the Neyman-Pearson criterion to minimize the missed detection probability subject to a given false alarm probability. The detection performance depends on choosing a probability distribution function (PDF) that can accurately model the distribution of the wavelet transform coefficients. The generalized Gaussian PDF is adopted here. Previously, the Gaussian PDF, which is a special case, has been considered for such a detection scheme. Using extensive experimentation, the generalized Gaussian PDF is shown to be a better model.
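
    A sketch of such a detector, using SciPy's generalized Gaussian distribution for the coefficient model: the log-likelihood ratio is computed under an additive embedding rule, and a detection threshold is set from the empirical null distribution. The shape parameter, embedding strength, and threshold procedure are illustrative assumptions.

        # Sketch of a generalized-Gaussian ML watermark detector: log-likelihood
        # ratio under an additive embedding rule, threshold from the empirical
        # null; shape, strength, and threshold procedure are assumptions.
        import numpy as np
        from scipy.stats import gennorm

        rng = np.random.default_rng(5)
        n, beta, scale, gamma = 4096, 0.8, 1.0, 0.05
        w = rng.choice([-1.0, 1.0], size=n)         # binary watermark sequence

        def llr(y):                                 # H1: y = x + gamma*w vs H0: y = x
            return np.sum(gennorm.logpdf(y - gamma * w, beta, scale=scale)
                          - gennorm.logpdf(y, beta, scale=scale))

        x = gennorm.rvs(beta, scale=scale, size=n, random_state=rng)
        print("LLR, no watermark  :", llr(x))
        print("LLR, with watermark:", llr(x + gamma * w))

        null = [llr(gennorm.rvs(beta, scale=scale, size=n, random_state=rng))
                for _ in range(200)]
        print("1% false-alarm threshold:", np.quantile(null, 0.99))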

  1. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
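
    For the acute-exposure special case, in which the time factor g(t, τ) reduces to a constant, the Poisson maximum-likelihood fit can be sketched as follows; the doses, numbers of cells scored, and the folding of κγ and κG into two free coefficients are illustrative assumptions.

        # Poisson ML fit of an acute-exposure dose response Y = N(d)*(a*d + b*d^2),
        # with a = kappa*gamma and b = kappa*G folded into two free coefficients;
        # doses, cell counts, and true values are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)
        dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # Gy
        cells = np.array([500, 500, 400, 300, 200])  # cells scored per dose point
        a_true, b_true = 0.03, 0.06
        y = rng.poisson(cells * (a_true * dose + b_true * dose ** 2))

        def nll(p):
            a, b = p
            mu = cells * (a * dose + b * dose ** 2)  # expected dicentric counts
            return np.sum(mu - y * np.log(mu))       # Poisson NLL up to a constant

        fit = minimize(nll, x0=[0.01, 0.01], method="L-BFGS-B",
                       bounds=[(1e-6, 1.0)] * 2)
        print("a-hat, b-hat:", fit.x)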

  3. Maximum likelihood estimation for cytogenetic dose-response curves

    SciTech Connect

    Frome, E.L.; DuFrain, R.J.

    1986-03-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  4. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlap cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimator does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant against separation and the estimates always exist. The effects of high leverage points are then investigated on the performance of the maximum estimated likelihood estimator through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  5. Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2014-11-01

    This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that UWB two-way time-of-flight ranges under indoor circumstances follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, to adjust the UWB network from the observed ranges, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowledge of the probability density functions. The 30 Hz sampling rate of the UWB sensors makes it possible to estimate these functions between each pair of nodes from samples acquired in static positioning mode. In order to test the MLE hypothesis, a UWB network was established in a dense-multipath environment for test data acquisition. The least squares and maximum likelihood coordinate solutions are determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
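
    A toy version of the comparison: ranges drawn from a two-component Gaussian mixture are used to locate a node, once by least squares and once by maximizing the mixture likelihood. The anchor layout and mixture parameters are illustrative assumptions, not the estimated indoor densities.

        # Toy mixture-likelihood positioning: four anchors, ranges contaminated
        # by a biased second Gaussian component; layout and mixture parameters
        # are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        p_true = np.array([3.0, 4.0])
        d_true = np.linalg.norm(anchors - p_true, axis=1)

        w, mu2, s1, s2 = 0.8, 0.6, 0.05, 0.3        # mixture: clean + biased part
        clean = rng.random(4) < w
        ranges = d_true + np.where(clean, rng.normal(0.0, s1, 4),
                                   rng.normal(mu2, s2, 4))

        def gauss(r, m, s):
            return np.exp(-0.5 * ((r - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

        def neg_mix_loglik(p):
            r = ranges - np.linalg.norm(anchors - p, axis=1)
            return -np.sum(np.log(w * gauss(r, 0.0, s1)
                                  + (1 - w) * gauss(r, mu2, s2)))

        lsq = minimize(lambda p: np.sum(
            (ranges - np.linalg.norm(anchors - p, axis=1)) ** 2), x0=[5.0, 5.0]).x
        mle = minimize(neg_mix_loglik, x0=lsq).x
        print("LSQ:", lsq, "MLE:", mle, "truth:", p_true)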

  6. The numerical evaluation of the maximum-likelihood estimate of a subset of mixture proportions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    Necessary and sufficient conditions are given for a maximum likelihood estimate of a subset of mixture proportions. From these conditions, likelihood equations satisfied by the maximum-likelihood estimate are derived, and a successive-approximations procedure suggested by these equations is discussed for numerically evaluating the maximum-likelihood estimate. It is shown that, with probability one for large samples, this procedure converges locally to the maximum-likelihood estimate whenever a certain step-size lies between zero and two. Furthermore, optimal rates of local convergence are obtained for a step-size which is bounded below by a number between one and two.
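
    For known component densities, the successive-approximations iteration with unit step size (inside the zero-to-two convergence range described above) coincides with the EM update for the mixture proportion, as in this sketch; the two-component normal mixture is an illustrative assumption.

        # Successive approximations for a mixture proportion with known component
        # densities; a unit step (inside the (0, 2) convergence range above) is
        # the EM update. The normal mixture is an illustrative assumption.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(8)
        x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
        f1, f2 = norm.pdf(x, 0.0, 1.0), norm.pdf(x, 3.0, 1.0)

        pi = 0.5                                    # initial proportion of component 1
        for _ in range(200):
            resp = pi * f1 / (pi * f1 + (1 - pi) * f2)  # membership probabilities
            pi_new = resp.mean()                    # unit-step successive approximation
            if abs(pi_new - pi) < 1e-10:
                break
            pi = pi_new
        print(f"estimated proportion {pi:.3f} (true 0.3)")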

  7. Maximum-likelihood methods in cryo-EM. Part II: application to experimental data

    PubMed Central

    Scheres, Sjors H.W.

    2010-01-01

    With the advent of computationally feasible approaches to maximum likelihood image processing for cryo-electron microscopy, these methods have proven particularly useful in the classification of structurally heterogeneous single-particle data. A growing number of experimental studies have applied these algorithms to study macromolecular complexes with a wide range of structural variability, including non-stoichiometric complex formation, large conformational changes and combinations of both. This chapter aims to share the practical experience that has been gained from the application of these novel approaches. Current insights on how to prepare the data and how to perform two- or three-dimensional classifications are discussed together with aspects related to high-performance computing. Thereby, this chapter will hopefully be of practical use for those microscopists wanting to apply maximum likelihood methods in their own investigations. PMID:20888966

  8. Maximum likelihood analysis for heteroscedastic one-way random effects ANOVA in interlaboratory studies.

    PubMed

    Vangel, M G; Rukhin, A L

    1999-03-01

    This article presents results for the maximum likelihood analysis of several groups of measurements made on the same quantity. Following Cochran (1937, Journal of the Royal Statistical Society 4(Suppl.), 102-118; 1954, Biometrics 10, 101-129; 1980, in Proceedings of the 25th Conference on the Design of Experiments in Army Research, Development and Testing, 21-33) and others, this problem is formulated as a one-way unbalanced random-effects ANOVA with unequal within-group variances. A reparametrization of the likelihood leads to simplified computations, easier identification and interpretation of multimodality of the likelihood, and (through a non-informative-prior Bayesian approach) approximate confidence regions for the mean and between-group variance. PMID:11318146

  9. An evaluation of several different classification schemes - Their parameters and performance. [maximum likelihood decision for crop identification

    NASA Technical Reports Server (NTRS)

    Scholz, D.; Fuhs, N.; Hixson, M.

    1979-01-01

    The overall objective of this study was to apply and evaluate several of the currently available classification schemes for crop identification. The approaches examined were: (1) a per point Gaussian maximum likelihood classifier, (2) a per point sum of normal densities classifier, (3) a per point linear classifier, (4) a per point Gaussian maximum likelihood decision tree classifier, and (5) a texture sensitive per field Gaussian maximum likelihood classifier. Three agricultural data sets were used in the study: areas from Fayette County, Illinois, and Pottawattamie and Shelby Counties in Iowa. The segments were located in two distinct regions of the Corn Belt to sample variability in soils, climate, and agricultural practices.

  10. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  11. Maximum-likelihood and other processors for incoherent and coherent matched-field localization.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2012-10-01

    This paper develops a series of maximum-likelihood processors for matched-field source localization given various states of information regarding the frequency and time variation of source amplitude and phase, and compares these with existing approaches to coherent processing with incomplete source knowledge. The comparison involves elucidating each processor's approach to source spectral information within a unifying formulation, which provides a conceptual framework for classifying and comparing processors and explaining their relative performance, as quantified in a numerical study. The maximum-likelihood processors represent optimal estimators given the assumption of Gaussian noise, and are based on analytically maximizing the corresponding likelihood function over explicit unknown source spectral parameters. Cases considered include knowledge of the relative variation in source amplitude over time and/or frequency (e.g., a flat spectrum), and tracking the relative phase variation over time, as well as incoherent and coherent processing. Other approaches considered include the conventional (Bartlett) processor, cross-frequency incoherent processor, pair-wise processor, and coherent normalized processor. Processor performance is quantified as the probability of correct localization from Monte Carlo appraisal over a large number of random realizations of noise, source location, and environmental parameters. Processors are compared as a function of signal-to-noise ratio, number of frequencies, and number of sensors. PMID:23039424

  12. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  13. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    PubMed

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. The ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs the subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies the maximum likelihood estimation technique to estimate the interval of R-R peaks. A parameter derived as a byproduct of maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique for realization of the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system. PMID:19162641

  14. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  15. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    SciTech Connect

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.

  16. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  17. Digital combining-weight estimation for broadband sources using maximum-likelihood estimates

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.; Vilnrotter, V. A.

    1994-01-01

    An algorithm previously described for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system is compared with the maximum-likelihood estimate. The maximum-likelihood estimate provides some improvement in performance, at the cost of increased computational complexity. However, the maximum-likelihood algorithm is simple enough to allow implementation on a PC-based combining system.

  18. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  19. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  20. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
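
    The REML-MVN resampling step itself is simple to sketch: draw the distinct elements of G from a multivariate normal centered on the estimate, with a covariance standing in for the inverse information matrix, and propagate each draw through the statistic of interest. The 2x2 G and its sampling covariance below are illustrative assumptions, not the Drosophila estimates.

        # REML-MVN sketch: sample the distinct elements of G from a multivariate
        # normal and propagate through a statistic; the 2x2 G-hat and its
        # sampling covariance are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(9)
        G_hat = np.array([[1.0, 0.3],
                          [0.3, 0.5]])
        idx = np.triu_indices(2)                    # distinct (vech-style) elements
        theta_hat = G_hat[idx]
        Sigma = 0.01 * np.eye(theta_hat.size)       # stand-in for inverse information

        def unpack(theta):
            G = np.zeros((2, 2))
            G[idx] = theta
            return G + np.triu(G, 1).T              # restore symmetry

        draws = rng.multivariate_normal(theta_hat, Sigma, size=5000)
        stat = np.array([np.mean(np.diag(unpack(t))) for t in draws])
        lo, hi = np.percentile(stat, [2.5, 97.5])
        print(f"statistic {stat.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")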

  1. Binomial and Poisson Mixtures, Maximum Likelihood, and Maple Code

    SciTech Connect

    Bowman, Kimiko o; Shenton, LR

    2006-01-01

    The bias, variance, and skewness of maximum likelihood estimators are considered for binomial and Poisson mixture distributions. The moments considered are asymptotic, and they are assessed using Maple code. Questions of the existence of solutions and Karl Pearson's study are mentioned, along with problems of the valid sample space. Large samples to reduce variances are not unusual; this also applies to the size of the asymptotic skewness.

  2. Maximum likelihood positioning and energy correction for scintillation detectors.

    PubMed

    Lerche, Christoph W; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-21

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and is therefore inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated MRI-compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance, and spatial resolution at the same time. PMID:26836394
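
    The robustness to missing pixels comes from the likelihood simply omitting dead channels, as in this toy sketch of ML crystal identification under a Poisson light model; the array sizes and random templates are illustrative assumptions, not calibrated detector responses.

        # Toy ML crystal identification under a Poisson light model: dead pixels
        # simply drop out of the likelihood sum. Array sizes and templates are
        # illustrative assumptions, not calibrated detector responses.
        import numpy as np

        rng = np.random.default_rng(10)
        n_pix, n_crys = 16, 64                      # 4x4 SiPM, 8x8 crystals (toy)
        templates = rng.uniform(1.0, 20.0, (n_crys, n_pix))  # expected counts

        true_crystal = 17
        counts = rng.poisson(templates[true_crystal]).astype(float)
        alive = np.ones(n_pix, dtype=bool)
        alive[[3, 7]] = False                       # two dead/paralysed pixels

        def loglik(c):
            mu, y = templates[c][alive], counts[alive]  # skip dead channels
            return np.sum(y * np.log(mu) - mu)      # Poisson log-likelihood

        best = max(range(n_crys), key=loglik)
        print("identified crystal:", best, "(true:", true_crystal, ")")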

  4. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance rather than increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  5. IQ-TREE: A Fast and Effective Stochastic Algorithm for Estimating Maximum-Likelihood Phylogenies

    PubMed Central

    Nguyen, Lam-Tung; Schmidt, Heiko A.; von Haeseler, Arndt; Minh, Bui Quang

    2015-01-01

    Large phylogenomics data sets require fast tree inference methods, especially for maximum-likelihood (ML) phylogenies. Fast programs exist, but due to the inherent heuristics used to find optimal trees, it is not clear whether the best tree is found. Thus, there is a need for additional approaches that employ different search strategies to find ML trees and that are at the same time as fast as currently available ML programs. We show that a combination of hill-climbing approaches and a stochastic perturbation method can be implemented time-efficiently. If we allow the same CPU time as RAxML and PhyML, then our software IQ-TREE found higher likelihoods for between 62.2% and 87.1% of the studied alignments, thus efficiently exploring the tree space. If we use the IQ-TREE stopping rule, RAxML and PhyML are faster on 75.7% and 47.1% of the DNA alignments and 42.2% and 100% of the protein alignments, respectively. However, the fraction of alignments for which IQ-TREE obtains higher likelihoods then improves to 73.3–97.1%. IQ-TREE is freely available at http://www.cibiv.at/software/iqtree. PMID:25371430

  6. Maximum likelihood estimation for model Mt,α for capture-recapture data with misidentification.

    PubMed

    Vale, R T R; Fewster, R M; Carroll, E L; Patenaude, N J

    2014-12-01

    We investigate model Mt,α for abundance estimation in closed-population capture-recapture studies, where animals are identified from natural marks such as DNA profiles or photographs of distinctive individual features. Model Mt,α extends the classical model Mt to accommodate errors in identification, by specifying that each sample identification is correct with probability α and false with probability 1-α. Information about misidentification is gained from a surplus of capture histories with only one entry, which arise from false identifications. We derive an exact closed-form expression for the likelihood for model Mt,α and show that it can be computed efficiently, in contrast to previous studies which have held the likelihood to be computationally intractable. Our fast computation enables us to conduct a thorough investigation of the statistical properties of the maximum likelihood estimates. We find that the indirect approach to error estimation places high demands on data richness, and good statistical properties in terms of precision and bias require high capture probabilities or many capture occasions. When these requirements are not met, abundance is estimated with very low precision and negative bias, and in extreme cases better properties can be obtained by the naive approach of ignoring misidentification error. We recommend that model Mt,α be used with caution and that other strategies for handling misidentification error be considered. We illustrate our study with genetic and photographic surveys of the New Zealand population of southern right whale (Eubalaena australis). PMID:24942186

  7. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  8. Maximum likelihood ratio tests for comparing the discriminatory ability of biomarkers subject to limit of detection.

    PubMed

    Vexler, Albert; Liu, Aiyi; Eliseeva, Ekaterina; Schisterman, Enrique F

    2008-09-01

    In this article, we consider comparing the areas under correlated receiver operating characteristic (ROC) curves of diagnostic biomarkers whose measurements are subject to a limit of detection (LOD), a source of measurement error from instruments' sensitivity in epidemiological studies. We propose and examine the likelihood ratio tests with operating characteristics that are easily obtained by classical maximum likelihood methodology. PMID:18047527
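
    The LOD-aware likelihood machinery can be sketched for a single normal biomarker: observations below the detection limit contribute a censoring probability rather than a density term. The normal model, LOD value, and optimizer are illustrative assumptions.

        # Left-censored normal likelihood for a biomarker with a limit of
        # detection: values below the LOD contribute Phi((LOD - mu)/sigma).
        # The normal model and LOD value are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(11)
        x = rng.normal(1.0, 1.0, 500)
        lod = 0.5
        observed = x[x >= lod]
        n_cens = int(np.sum(x < lod))               # only the count below LOD is kept

        def nll(p):
            mu, log_s = p
            s = np.exp(log_s)                       # keeps sigma positive
            ll = (norm.logpdf(observed, mu, s).sum()
                  + n_cens * norm.logcdf(lod, mu, s))
            return -ll

        fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
        print(f"mu {fit.x[0]:.3f}, sigma {np.exp(fit.x[1]):.3f} (truth 1.0, 1.0)")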

  9. On the use of maximum likelihood estimation for the assembly of Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.; Ramakrishnan, Jayant

    1991-01-01

    Distributed parameter models of the Solar Array Flight Experiment, the Mini-MAST truss, and Space Station Freedom assembly are discussed. The distributed parameter approach takes advantage of (1) the relatively small number of model parameters associated with partial differential equation models of structural dynamics, (2) maximum-likelihood estimation using both prelaunch and on-orbit test data, (3) the inclusion of control system dynamics in the same equations, and (4) the incremental growth of the structural configurations. Maximum-likelihood parameter estimates for distributed parameter models were based on static compliance test results and frequency response measurements. Because the Space Station Freedom does not yet exist, the NASA Mini-MAST truss was used to test the procedure of modeling and parameter estimation. The resulting distributed parameter model of the Mini-MAST truss successfully demonstrated the approach taken. The computer program PDEMOD enables any configuration that can be represented by a network of flexible beam elements and rigid bodies to be modeled.

  10. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.

  11. FITTING STATISTICAL DISTRIBUTIONS TO AIR QUALITY DATA BY THE MAXIMUM LIKELIHOOD METHOD

    EPA Science Inventory

    A computer program has been developed for fitting statistical distributions to air pollution data using maximum likelihood estimation. Appropriate uses of this software are discussed and a grouped data example is presented. The program fits the following continuous distributions:...

  12. L.U.St: a tool for approximated maximum likelihood supertree reconstruction

    PubMed Central

    2014-01-01

    Background Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high-level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on the results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics, where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Results Here, we present the L.U.St package, a Python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree, given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning-sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). Conclusion This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has Python installed. Availability: bitBucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Contact: Davide.Pisani@bristol.ac.uk. PMID:24925766

  13. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  14. Fuzzy modeling, maximum likelihood estimation, and Kalman filtering for target tracking in NLOS scenarios

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Yu, Kegen; Wu, Lenan

    2014-12-01

    To mitigate the non-line-of-sight (NLOS) effect, a three-step positioning approach is proposed in this article for target tracking. The possibility of each distance measurement under line-of-sight condition is first obtained by applying the truncated triangular probability-possibility transformation associated with fuzzy modeling. Based on the calculated possibilities, the measurements are utilized to obtain intermediate position estimates using the maximum likelihood estimation (MLE), according to identified measurement condition. These intermediate position estimates are then filtered using a linear Kalman filter (KF) to produce the final target position estimates. The target motion information and statistical characteristics of the MLE results are employed in updating the KF parameters. The KF position prediction is exploited for MLE parameter initialization and distance measurement selection. Simulation results demonstrate that the proposed approach outperforms the existing algorithms in the presence of unknown NLOS propagation conditions and achieves a performance close to that when propagation conditions are perfectly known.

  15. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve at different rates, under different selective pressures, or with different codon bias. Instead of modeling these differences with ever more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229
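    Because the partitions are treated as independent, the total log-likelihood of a tree is simply the sum of per-partition log-likelihoods, each evaluated under its own codon model (with parameters optionally shared). A schematic sketch, where the partition objects and their loglik method are stand-ins for a real codon-model implementation:

      def partitioned_loglik(tree, partitions):
          # Each partition scores the same tree topology under its own
          # codon model; independence makes the joint log-likelihood additive.
          return sum(p.loglik(tree) for p in partitions)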

  16. A Targeted Maximum Likelihood Estimator of a Causal Effect on a Bounded Continuous Outcome

    PubMed Central

    Gruber, Susan; van der Laan, Mark J.

    2010-01-01

    Targeted maximum likelihood estimation of a parameter of a data generating distribution, known to be an element of a semi-parametric model, involves constructing a parametric model through an initial density estimator with parameter ɛ representing an amount of fluctuation of the initial density estimator, where the score of this fluctuation model at ɛ = 0 equals the efficient influence curve/canonical gradient. The latter constraint can be satisfied by many parametric fluctuation models since it represents only a local constraint of its behavior at zero fluctuation. However, it is very important that the fluctuations stay within the semi-parametric model for the observed data distribution, even if the parameter can be defined on fluctuations that fall outside the assumed observed data model. In particular, in the context of sparse data, by which we mean situations where the Fisher information is low, a violation of this property can heavily affect the performance of the estimator. This paper presents a fluctuation approach that guarantees the fluctuated density estimator remains inside the bounds of the data model. We demonstrate this in the context of estimation of a causal effect of a binary treatment on a continuous outcome that is bounded. It results in a targeted maximum likelihood estimator that inherently respects known bounds, and consequently is more robust in sparse data situations than the targeted MLE using a naive fluctuation model. When an estimation procedure incorporates weights, observations having large weights relative to the rest heavily influence the point estimate and inflate the variance. Truncating these weights is a common approach to reducing the variance, but it can also introduce bias into the estimate. We present an alternative targeted maximum likelihood estimation (TMLE) approach that dampens the effect of these heavily weighted observations. As a substitution estimator, TMLE respects the global constraints of the observed data
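    The key device described above is to fluctuate on the logit scale after rescaling the outcome to the unit interval, so every fluctuated estimate automatically respects the known bounds. A minimal sketch of that step, with illustrative variable names rather than the authors' code:

      import numpy as np

      def expit(u):
          return 1.0 / (1.0 + np.exp(-u))

      def logit(p):
          return np.log(p / (1.0 - p))

      def fluctuate(Q0, H, eps, lo, hi):
          """Logistic fluctuation of an initial outcome regression Q0 that
          stays inside the outcome bounds [lo, hi] (illustrative sketch).

          Q0  : initial estimates of E[Y|A,W] on the original outcome scale
          H   : clever-covariate values
          eps : fluctuation parameter
          """
          # Rescale to (0, 1); clip to avoid logit of exactly 0 or 1.
          Q_star = np.clip((Q0 - lo) / (hi - lo), 1e-6, 1 - 1e-6)
          Q_eps = expit(logit(Q_star) + eps * H)  # fluctuate on logit scale
          return lo + (hi - lo) * Q_eps           # map back to original bounds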

  17. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of a mixture of normal distributions, corresponding to a local maximum of the log-likelihood function. In addition, Newton's method, a method of scoring, and modifications of these procedures are discussed.
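    For normal mixtures, the fixed-point iteration of this kind is what is now known as the EM algorithm. A minimal sketch for a two-component univariate mixture, assuming nothing beyond NumPy; the initialisation is deliberately crude:

      import numpy as np

      def em_two_gaussians(x, n_iter=100):
          """EM-style iteration for ML estimation of a two-component
          normal mixture (illustrative sketch)."""
          w, mu1, mu2 = 0.5, x.min(), x.max()
          s1 = s2 = x.std()
          for _ in range(n_iter):
              # E-step: posterior probability each point came from component 1.
              p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
              p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
              r = p1 / (p1 + p2)
              # M-step: responsibility-weighted ML updates of the parameters.
              w = r.mean()
              mu1 = np.sum(r * x) / np.sum(r)
              mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
              s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
              s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
          return w, mu1, s1, mu2, s2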

  19. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  20. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  1. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
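    A standard supervised maximum-likelihood classifier of this kind assigns each pixel to the class whose multivariate Gaussian, fitted from training areas, gives the highest log-likelihood. A generic sketch of the approach follows; it is not the BOREAS production code.

      import numpy as np

      def ml_classify(pixels, means, covs):
          """Supervised maximum-likelihood classification of multispectral
          pixels (generic sketch).

          pixels : (n, b) array of b-band pixel vectors
          means  : list of per-class mean vectors from training areas
          covs   : list of per-class covariance matrices
          """
          scores = []
          for mu, S in zip(means, covs):
              d = pixels - mu
              S_inv = np.linalg.inv(S)
              # Gaussian log-likelihood up to an additive constant.
              maha = np.einsum('ij,jk,ik->i', d, S_inv, d)
              scores.append(-0.5 * (np.linalg.slogdet(S)[1] + maha))
          return np.argmax(np.column_stack(scores), axis=1)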

  2. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  3. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGESBeta

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; Simonson, Katherine Mary

    2016-01-11

    In past research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to indicate temporal change where there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.

  4. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  5. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  6. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  7. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with simulated data and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  8. A maximum likelihood method for determining the distribution of galaxies in clusters

    NASA Astrophysics Data System (ADS)

    Sarazin, C. L.

    1980-02-01

    A maximum likelihood method is proposed for the analysis of the projected distribution of galaxies in clusters. It has many advantages compared to the standard method; principally, it does not require binning of the galaxy positions, applies to asymmetric clusters, and can simultaneously determine all cluster parameters. A rapid method of solving the maximum likelihood equations is given which also automatically gives error estimates for the parameters. Monte Carlo tests indicate this method applies even for rather sparse clusters. The Godwin-Peach data on the Coma cluster are analyzed; the core sizes derived agree reasonably with those of Bahcall. Some slight evidence of mass segregation is found.

  9. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  10. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    SciTech Connect

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm, based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
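    The core of such a map-maker is the solution of the generalized least-squares normal equations (A^T N^-1 A) m = A^T N^-1 d by conjugate gradients. A toy sketch using SciPy follows; MADmap applies N^-1 with FFTs on distributed data and uses a preconditioner, whereas here Ninv_times is any user-supplied callable and no preconditioner is shown.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def make_map(A, Ninv_times, d, npix):
          """Toy maximum-likelihood map-maker (illustrative sketch).

          A          : sparse pointing matrix (n_samples x npix)
          Ninv_times : callable applying the inverse noise covariance
          d          : time-ordered data vector
          """
          def lhs(m):
              # Apply the normal-equation operator A^T N^-1 A to a map vector.
              return A.T @ Ninv_times(A @ m)

          op = LinearOperator((npix, npix), matvec=lhs)
          b = A.T @ Ninv_times(d)
          m, info = cg(op, b)   # info == 0 signals convergence
          return m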

  11. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  12. Necessary conditions for a maximum likelihood estimate to become asymptotically unbiased and attain the Cramer-Rao lower bound. Part I. General approach with an application to time-delay and Doppler shift estimation.

    PubMed

    Naftali, E; Makris, N C

    2001-10-01

    Analytic expressions for the first order bias and second order covariance of a general maximum likelihood estimate (MLE) are presented. These expressions are used to determine general analytic conditions on sample size, or signal-to-noise ratio (SNR), that are necessary for a MLE to become asymptotically unbiased and attain minimum variance as expressed by the Cramer-Rao lower bound (CRLB). The expressions are then evaluated for multivariate Gaussian data. The results can be used to determine asymptotic biases, variances, and conditions for estimator optimality in a wide range of inverse problems encountered in ocean acoustics and many other disciplines. The results are then applied to rigorously determine conditions on SNR necessary for the MLE to become unbiased and attain minimum variance in the classical active sonar and radar time-delay and Doppler-shift estimation problems. The time-delay MLE is the time lag at the peak value of a matched filter output. It is shown that the matched filter estimate attains the CRLB for the signal's position when the SNR is much larger than the kurtosis of the expected signal's energy spectrum. The Doppler-shift MLE exhibits dual behavior for narrowband analytic signals. In a companion paper, the general theory presented here is applied to the problem of estimating the range and depth of an acoustic source submerged in an ocean waveguide. PMID:11681372

  13. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.

  14. Maximum-likelihood density modification using pattern recognition of structural motifs

    SciTech Connect

    Terwilliger, Thomas C.

    2001-12-01

    A likelihood-based density-modification method is extended to include pattern recognition of structural motifs. The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification.

  15. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  16. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  17. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  18. 12-mode OFDM transmission using reduced-complexity maximum likelihood detection.

    PubMed

    Lobato, Adriana; Chen, Yingkan; Jung, Yongmin; Chen, Haoshuo; Inan, Beril; Kuschnerov, Maxim; Fontaine, Nicolas K; Ryf, Roland; Spinnler, Bernhard; Lankl, Berthold

    2015-02-01

    We report 163-Gb/s MDM-QPSK-OFDM and 245-Gb/s MDM-8QAM-OFDM transmission over 74 km of few-mode fiber supporting 12 spatial and polarization modes. A low-complexity maximum likelihood detector is employed to enhance the performance of a system impaired by mode-dependent loss. PMID:25680039

  19. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  20. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  1. Estimation of Maximum Likelihood of the Unextendable Dead Time Period in a Flow of Physical Events

    NASA Astrophysics Data System (ADS)

    Gortsev, A. M.; Solov'ev, A. A.

    2016-03-01

    A flow of physical events (photons, electrons, etc.) is studied. One of the mathematical models of such flows is the MAP flow of events. The flow operates under conditions of an unextendable dead time period whose duration is unknown. The dead time period is estimated by the method of maximum likelihood from observations of the arrival instants of events.

  2. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  3. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  4. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  5. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  6. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  7. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and behavior models is discussed. It is found to be useful in day-to-day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive categories model and for signal detectability models.

  8. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods.

    PubMed

    Benevides, Leandro de Jesus; Carvalho, Daniel Santana de; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-07-14

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study of apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27419397

  9. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  10. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases. PMID:26289495

  11. The Benefits of Maximum Likelihood Estimators in Predicting Bulk Permeability and Upscaling Fracture Networks

    NASA Astrophysics Data System (ADS)

    Emanuele Rizzo, Roberto; Healy, David; De Siena, Luca

    2016-04-01

    The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in fractured rock, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (lengths, apertures, orientations and densities) is fundamental to the estimation of permeability and fluid flow, which are of primary importance in a number of contexts including: hydrocarbon production from fractured reservoirs; geothermal energy extraction; and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. Our work links outcrop fracture data to modelled fracture networks in order to numerically predict bulk permeability. We collected outcrop data from a highly fractured upper Miocene biosiliceous mudstone formation, cropping out along the coastline north of Santa Cruz (California, USA). Using outcrop fracture networks as analogues for subsurface fracture systems has several advantages, because key fracture attributes such as spatial arrangements and lengths can be effectively measured only on outcrops [1]. However, a limitation when dealing with outcrop data is the relative sparseness of natural data due to the intrinsic finite size of the outcrops. We make use of a statistical approach for the overall workflow, starting from data collection with the Circular Windows Method [2]. Then we analyse the data statistically using Maximum Likelihood Estimators, which provide greater accuracy compared to the more commonly used Least Squares linear regression when investigating distribution of fracture attributes. Finally, we estimate the bulk permeability of the fractured rock mass using Oda's tensorial approach [3]. The higher quality of this statistical analysis is fundamental: better statistics of the fracture attributes means more accurate permeability estimation, since the fracture attributes feed
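    As an illustration of the advantage mentioned above: if fracture lengths above a cutoff are assumed to follow a power law, the maximum-likelihood estimate of the exponent has a simple closed form and avoids the bias of least-squares fits to log-log histograms. A hedged sketch follows; the power-law assumption and all names are ours, not the authors'.

      import numpy as np

      def powerlaw_mle(lengths, x_min):
          """ML estimate of a power-law exponent for fracture lengths
          above a cutoff x_min (illustrative sketch)."""
          x = np.asarray(lengths, dtype=float)
          x = x[x >= x_min]
          n = x.size
          alpha = 1.0 + n / np.sum(np.log(x / x_min))   # Hill/ML estimator
          se = (alpha - 1.0) / np.sqrt(n)               # asymptotic std. error
          return alpha, se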

  12. Maximum-likelihood density modification using pattern recognition of structural motifs

    PubMed Central

    Terwilliger, Thomas C.

    2001-01-01

    The likelihood-based approach to density modification [Terwilliger (2000), Acta Cryst. D56, 965–972] is extended to include the recognition of patterns of electron density. Once a region of electron density in a map is recognized as corresponding to a known structural element, the likelihood of the map is reformulated to include a term that reflects how closely the map agrees with the expected density for that structural element. This likelihood is combined with other aspects of the likelihood of the map, including the presence of a flat solvent region and the electron-density distribution in the protein region. This likelihood-based pattern-recognition approach was tested using the recognition of helical segments in a largely helical protein. The pattern-recognition method yields a substantial phase improvement over both conventional and likelihood-based solvent-flattening and histogram-matching methods. The method can potentially be used to recognize any common structural motif and incorporate prior knowledge about that motif into density modification. PMID:11717487

  13. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    data blocks could benefit from its ability to take different noise levels per data block into consideration and improve the recovery of the true patterns underlying the data. Moreover, the maximum likelihood based approach underlying MxLSCA-P could be extended to custom-made solutions to specific problems encountered. PMID:19835617

  14. Intra-Die Spatial Correlation Extraction with Maximum Likelihood Estimation Method for Multiple Test Chips

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei

    In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
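    The numerical device in the last step is generic: the log-determinant inside a Gaussian log-likelihood is computed from a triangular factorisation rather than a direct determinant call, which would overflow for large matrices. A minimal sketch using an LU factorisation, as the abstract describes; the code is illustrative, not the authors'.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def gaussian_loglik(y, K):
          """Gaussian log-likelihood with a stable LU-based log-determinant
          (illustrative sketch)."""
          lu, piv = lu_factor(K)
          # For positive-definite K, det(K) > 0, so summing the logs of the
          # absolute values of U's diagonal gives log det(K) without overflow.
          logdet = np.sum(np.log(np.abs(np.diag(lu))))
          alpha = lu_solve((lu, piv), y)     # K^{-1} y without forming K^{-1}
          n = y.size
          return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))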

  15. Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images.

    PubMed

    Aysal, Tuncer C; Barner, Kenneth E

    2007-05-01

    Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices necessitate the need for more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in the smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges--they only inhibit smoothing near edges. In this paper, we propose a novel technique that is capable of reducing the speckle more effectively than previous methods and jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts the robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first and second moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit varying statistics of input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data. PMID:17518065
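    Under a Rayleigh speckle model, the ML estimate of the scale parameter from a local window has a simple closed form, which is the starting point for estimators of this kind. A simplified sketch follows; the paper's weighted, adaptive estimator is considerably more elaborate.

      import numpy as np

      def rayleigh_mle_sigma(window):
          """Closed-form ML estimate of the Rayleigh scale parameter from
          the samples in a local window: sigma^2 = sum(x_i^2) / (2n). A
          despeckling filter would slide such a window over the image
          (simplified sketch)."""
          x = np.asarray(window, dtype=float).ravel()
          return np.sqrt(np.sum(x ** 2) / (2.0 * x.size))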

  16. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    PubMed Central

    Stepanyuk, Andrey; Borisyuk, Anya; Belan, Pavel

    2014-01-01

    Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation of not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current as compared to peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment. PMID:25324721

  17. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  18. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
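    For reference, the Kasai autocorrelation estimator that the authors compare against is essentially a one-line computation on the complex OCT signal. A minimal sketch, in its standard textbook form:

      import numpy as np

      def kasai_frequency(a, dt):
          """Kasai autocorrelation estimate of the Doppler frequency from
          complex samples a[0..N-1] acquired at interval dt (standard form
          of the estimator, illustrative sketch)."""
          r1 = np.sum(a[1:] * np.conj(a[:-1]))   # lag-one autocorrelation
          return np.angle(r1) / (2.0 * np.pi * dt)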

  19. Addressing Item-Level Missing Data: A Comparison of Proration and Full Information Maximum Likelihood Estimation.

    PubMed

    Mazza, Gina L; Enders, Craig K; Ruehlman, Linda S

    2015-01-01

    Often when participants have missing scores on one or more of the items comprising a scale, researchers compute prorated scale scores by averaging the available items. Methodologists have cautioned that proration may make strict assumptions about the mean and covariance structures of the items comprising the scale (Schafer & Graham, 2002 ; Graham, 2009 ; Enders, 2010 ). We investigated proration empirically and found that it resulted in bias even under a missing completely at random (MCAR) mechanism. To encourage researchers to forgo proration, we describe a full information maximum likelihood (FIML) approach to item-level missing data handling that mitigates the loss in power due to missing scale scores and utilizes the available item-level data without altering the substantive analysis. Specifically, we propose treating the scale score as missing whenever one or more of the items are missing and incorporating items as auxiliary variables. Our simulations suggest that item-level missing data handling drastically increases power relative to scale-level missing data handling. These results have important practical implications, especially when recruiting more participants is prohibitively difficult or expensive. Finally, we illustrate the proposed method with data from an online chronic pain management program. PMID:26610249

  20. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
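    The Newton-Raphson scheme described is generic. A compact sketch of solving the score equations follows; the score and Hessian callables are user-supplied stand-ins for illustration, not NDMMF code.

      import numpy as np

      def newton_raphson_mle(score, hessian, theta0, tol=1e-8, max_iter=50):
          """Generic Newton-Raphson solver for the ML equations
          score(theta) = 0 (illustrative sketch)."""
          theta = np.asarray(theta0, dtype=float)
          for _ in range(max_iter):
              # Newton step: theta <- theta - H^{-1} g, solved as a linear system.
              step = np.linalg.solve(hessian(theta), score(theta))
              theta = theta - step
              if np.max(np.abs(step)) < tol:
                  break
          return theta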

  1. A statistical technique for processing radio interferometer data. [using maximum likelihood algorithm

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. D.

    1975-01-01

    The output of a radio interferometer is the Fourier transform of the object under investigation. Due to the limited coverage of the Fourier plane, the reconstruction of the image of the source is blurred by the beam of the synthesized array. A maximum-likelihood processing technique is described which uses the statistical properties of the received noise-like signals. This technique has been used extensively in the processing of large-aperture seismic arrays. This inversion method results in a synthesized beam that is more uniform, has lower sidelobes, and offers higher resolution than standard Fourier transform methods. The maximum-likelihood algorithm was applied successfully to both very long baseline and short baseline interferometric data.

  2. Introducing robustness to maximum-likelihood refinement of electron-microscopy data

    SciTech Connect

    Scheres, Sjors H. W. Carazo, José-María

    2009-07-01

    An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy data is presented that is based on finite mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides robustness against outliers in the data. An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy images is presented that is based on fitting mixtures of multivariate t-distributions. The novel algorithm has intrinsic characteristics for providing robustness against atypical observations in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.
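    The robustness mechanism is visible in the E-step of EM for t-distributions: each observation receives a weight that shrinks with its Mahalanobis distance, so outliers are down-weighted rather than discarded. A textbook-form sketch of the weight computation, not the paper's code:

      import numpy as np

      def t_weights(delta2, nu, d):
          """Per-observation E-step weights for a d-dimensional multivariate
          t component: w_i = (nu + d) / (nu + delta_i^2), where delta_i^2 is
          the squared Mahalanobis distance of observation i. Small weights
          down-weight outliers (illustrative sketch)."""
          return (nu + d) / (nu + np.asarray(delta2, dtype=float))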

  3. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks.

    PubMed

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  4. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  5. A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations

    PubMed Central

    Anderson, Amy D.; Weir, Bruce S.

    2007-01-01

    A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212

  6. Maximum-Likelihood Estimator of Clock Offset between Nanomachines in Bionanosensor Networks

    PubMed Central

    Lin, Lin; Yang, Chengfeng; Ma, Maode

    2015-01-01

    Recent advances in nanotechnology, electronic technology and biology have enabled the development of bio-inspired nanoscale sensors. The cooperation among the bionanosensors in a network is envisioned to perform complex tasks. Clock synchronization is essential to establish diffusion-based distributed cooperation in the bionanosensor networks. This paper proposes a maximum-likelihood estimator of the clock offset for the clock synchronization among molecular bionanosensors. The unique properties of diffusion-based molecular communication are described. Based on the inverse Gaussian distribution of the molecular propagation delay, a two-way message exchange mechanism for clock synchronization is proposed. The maximum-likelihood estimator of the clock offset is derived. The convergence and the bias of the estimator are analyzed. The simulation results show that the proposed estimator is effective for the offset compensation required for clock synchronization. This work paves the way for the cooperation of nanomachines in diffusion-based bionanosensor networks. PMID:26690173

  7. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  8. Maximum likelihood reconstruction in fully 3D PET via the SAGE algorithm

    SciTech Connect

    Ollinger, J.M.; Goggin, A.S.

    1996-12-31

    The SAGE and ordered subsets algorithms have been proposed as fast methods to compute penalized maximum likelihood estimates in PET. We have implemented both for use in fully 3D PET and completed a preliminary evaluation. The technique used to compute the transition matrix is fully described. The evaluation suggests that the ordered subsets algorithm converges much faster than SAGE, but that it stops short of the optimal solution.

  9. Determination of linear displacement by envelope detection with maximum likelihood estimation

    SciTech Connect

    Lang, Kuo-Chen; Teng, Hui-Kang

    2010-09-20

    We demonstrate in this report an envelope detection technique with maximum likelihood estimation in a least squares sense for determining displacement. This technique is achieved by sampling the amplitudes of quadrature signals resulting from a heterodyne interferometer, so that a displacement measurement resolution of the order of λ/10^4 is experimentally verified. A phase unwrapping procedure is also described and experimentally demonstrated, indicating that the unambiguity range of displacement can be measured beyond a single wavelength.

  10. PHYML Online—a web server for fast maximum likelihood-based phylogenetic inference

    PubMed Central

    Guindon, Stéphane; Lethiec, Franck; Duroux, Patrice; Gascuel, Olivier

    2005-01-01

    PHYML Online is a web interface to PHYML, a software that implements a fast and accurate heuristic for estimating maximum likelihood phylogenies from DNA and protein sequences. This tool provides the user with a number of options, e.g. nonparametric bootstrap and estimation of various evolutionary parameters, in order to perform comprehensive phylogenetic analyses on large datasets in reasonable computing time. The server and its documentation are available at . PMID:15980534

  11. Maximum Likelihood-Based Iterated Divided Difference Filter for Nonlinear Systems from Discrete Noisy Measurements

    PubMed Central

    Wang, Changyuan; Zhang, Jing; Mu, Jing

    2012-01-01

    A new filter named the maximum likelihood-based iterated divided difference filter (MLIDDF) is developed to improve the poor accuracy of nonlinear state estimation caused by large initial estimation errors and the nonlinearity of measurement equations. The MLIDDF algorithm is derivative-free and implemented only by calculating functional evaluations. It employs an iterated measurement update using the current measurement, and an iteration termination criterion based on maximum likelihood is introduced in the measurement update step, so the MLIDDF is guaranteed to produce a sequence of estimates that moves up the maximum likelihood surface. In a simulation, its performance is compared against that of the unscented Kalman filter (UKF), the divided difference filter (DDF), the iterated unscented Kalman filter (IUKF) and the iterated divided difference filter (IDDF), the latter two using a traditional iteration strategy. Simulation results demonstrate that the accumulated root-mean-square position error of the MLIDDF algorithm is reduced by 63% compared to that of the UKF and DDF algorithms, and by 7% compared to that of the IUKF and IDDF algorithms. The new algorithm thus has better state estimation accuracy and a fast convergence rate. PMID:23012525

  12. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) procedure for the stochastic state space model, via the expectation-maximisation algorithm, that can cope with observation outliers. Two types of outliers and their influence are studied: namely, the additive outlier (AO) and the innovative outlier (IO). Because of the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying it: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement, with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. The TMLE, on the other hand, reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the computational difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation study shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  13. A calibration method of self-referencing interferometry based on maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei; Guo, Guangrao

    2015-05-01

    Self-referencing interferometry has been widely used in wavefront sensing. Currently, however, the result of a wavefront measurement contains two parts: the real phase information of the wavefront under test and the system error of the self-referencing interferometer. In this paper, a method based on maximum likelihood estimation is presented to calibrate the system error in a self-referencing interferometer. First, at least three phase difference distributions are obtained from three position measurements of the tested component: one basic position, one rotation and one lateral translation. Then the three phase difference data sets are combined into a maximum likelihood function, and the wavefront under test and the system errors are reconstructed by least squares estimation with Zernike polynomials. Simulation results show that the proposed method can calibrate a self-referencing interferometer, reduce the effect of system errors on extracting and reconstructing the wavefront under test, and improve the measurement accuracy of the self-referencing interferometer.

  14. Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences

    PubMed Central

    Ekisheva, Svetlana

    2010-01-01

    Probabilistic models for biological sequences (DNA and proteins) have many useful applications in bioinformatics. Normally, the values of the parameters of these models have to be estimated from empirical data. However, the properties of even the most common estimates, the maximum likelihood (ML) estimates, have not been completely explored. Here we assess the uniform accuracy of the ML estimates for models of several types: the independence model, the Markov chain and the hidden Markov model (HMM). In particular, we derive rates of decay of the maximum estimation error by employing measure concentration as well as the Gaussian approximation, and we compare these rates. PMID:21318122
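
    For the Markov chain case, the ML estimate whose accuracy is studied is simply the row-normalized matrix of transition counts; a minimal sketch with a hypothetical state sequence:

      import numpy as np

      def ml_transition_matrix(seq, n_states):
          """ML estimate of a Markov transition matrix: row-normalized counts."""
          counts = np.zeros((n_states, n_states))
          for a, b in zip(seq[:-1], seq[1:]):
              counts[a, b] += 1.0
          rows = counts.sum(axis=1, keepdims=True)
          # Rows never visited are left as NaN rather than guessed.
          return np.divide(counts, rows, out=np.full_like(counts, np.nan),
                           where=rows > 0)

      seq = [0, 1, 1, 0, 2, 1, 0, 0, 1, 2, 2, 1]
      print(ml_transition_matrix(seq, 3))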

  15. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. More generally, it constitutes a step toward a better integration of interspecific

  16. A Survey of the Likelihood Approach to Bioequivalence Trials

    PubMed Central

    Choi, Leena; Caffo, Brian; Rohde, Charles

    2009-01-01

    Bioequivalence trials are abbreviated clinical trials whereby a generic drug or new formulation is evaluated to determine if it is “equivalent” to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area, which we believe indicate systemic defects in the frequentist approach that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence. We discuss how the likelihood approach is useful to present the evidence for both average and population bioequivalence within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. PMID:18618422

  17. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423

  18. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
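
    The affine-invariance claim is easy to check numerically: a Gaussian ML classifier fitted to affinely transformed data makes the same decisions, because the quadratic form is invariant and the log-determinant shifts by the same constant for every class. A small sketch with hypothetical data follows.

      import numpy as np

      def gaussian_ml_classify(X_train, y_train, X_test):
          """Per-class Gaussian maximum likelihood classification, equal priors."""
          labels = np.unique(y_train)
          scores = []
          for c in labels:
              Z = X_train[y_train == c]
              mu = Z.mean(axis=0)
              cov = np.cov(Z, rowvar=False)
              inv = np.linalg.inv(cov)
              _, logdet = np.linalg.slogdet(cov)
              D = X_test - mu
              maha = np.einsum("ij,jk,ik->i", D, inv, D)  # squared Mahalanobis
              scores.append(-0.5 * (maha + logdet))
          return labels[np.argmax(scores, axis=0)]

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 5))
      y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
      A = rng.normal(size=(5, 5))             # non-singular with probability 1
      b = rng.normal(size=5)

      pred_raw = gaussian_ml_classify(X[:150], y[:150], X[150:])
      pred_aff = gaussian_ml_classify(X[:150] @ A + b, y[:150], X[150:] @ A + b)
      # log|A S A^T| = log|S| + 2 log|det A| shifts every class score equally,
      # so the decisions coincide (up to floating-point effects).
      print("agreement:", np.mean(pred_raw == pred_aff))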

  19. An inconsistency in the standard maximum likelihood estimation of bulk flows

    SciTech Connect

    Nusser, Adi

    2014-11-01

    Maximum likelihood estimation of the bulk flow from radial peculiar motions of galaxies generally assumes a constant velocity field inside the survey volume. This assumption is inconsistent with the definition of bulk flow as the average of the peculiar velocity field over the relevant volume. This follows from a straightforward mathematical relation between the bulk flow of a sphere and the velocity potential on its surface. This inconsistency also exists for ideal data with exact radial velocities and full spatial coverage. Based on the same relation, we propose a simple modification to correct for this inconsistency.

  20. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  1. Optimized sparse presentation-based classification method with weighted block and maximum likelihood model

    NASA Astrophysics Data System (ADS)

    He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao

    2014-06-01

    This paper applies sparse representation-based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced, and experiments on the Yale and AR face databases are reported. The experimental results show that the proposed optimized SRC method performs better than existing methods.

  2. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.

  3. Phantom study of tear film dynamics with optical coherence tomography and maximum-likelihood estimation

    PubMed Central

    Huang, Jinxin; Lee, Kye-sung; Clarkson, Eric; Kupinski, Matthew; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2016-01-01

    In this Letter, we implement a maximum-likelihood estimator to interpret optical coherence tomography (OCT) data for the first time, based on Fourier-domain OCT and a two-interface tear film model. We use the root mean square error as a figure of merit to quantify the system performance of estimating the tear film thickness. With the methodology of task-based assessment, we study the trade-off between system imaging speed (temporal resolution of the dynamics) and the precision of the estimation. Finally, the estimator is validated with a digital tear-film dynamics phantom. PMID:23938923

  4. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarms (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  5. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and therefore applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  6. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  7. F-8C adaptive flight control extensions. [for maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Stein, G.; Hartmann, G. L.

    1977-01-01

    An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.

  8. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information. PMID:16350919

  9. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  10. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  11. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. The conclusions support the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement them. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations experienced with the nonextended form of the algorithm presented here. PMID:20555971
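
    A compact sketch of the Richardson-Lucy / ML-EM iteration the paper tests, applied to a hypothetical Gaussian blur with Poisson noise (2-D here; the microscopy applications are 3-D):

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, n_iter=30):
          """Richardson-Lucy / ML-EM deconvolution for Poisson noise."""
          est = np.full_like(image, image.mean())
          psf_flip = psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = image / np.clip(blurred, 1e-12, None)
              est = est * fftconvolve(ratio, psf_flip, mode="same")
          return est

      rng = np.random.default_rng(3)
      truth = np.zeros((64, 64))
      truth[30:34, 30:34] = 50.0
      yy, xx = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx**2 + yy**2) / 8.0)
      psf /= psf.sum()
      data = rng.poisson(fftconvolve(truth, psf, mode="same").clip(0)).astype(float)
      est = richardson_lucy(data, psf)
      print("peak before/after restoration:", data.max(), est.max())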

  12. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed. PMID:24182157

  13. Maximum likelihood estimation of missing data applied to flow reconstruction around NACA profiles

    NASA Astrophysics Data System (ADS)

    Leroux, R.; Chatellier, L.; David, L.

    2015-10-01

    In this paper, we investigate maximum likelihood estimation for missing data in fluid flow series. The maximum likelihood estimates are obtained with the expectation-maximization (EM) algorithm applied to linear and quadratic proper orthogonal decomposition (POD) Galerkin reduced-order models (ROMs) for various sub-samplings of large data sets. The flows around a NACA0012 profile at a Reynolds number of 10^3 and an angle of incidence of 20° and around a NACA0015 profile at a Reynolds number of 10^5 and an angle of incidence of 30° are first investigated using time-resolved particle image velocimetry measurements and sub-sampled according to different ratios of missing data. The EM algorithm is then applied to the POD ROMs constructed from the sub-sampled data sets. The results show that, depending on the sub-sampling used, the EM algorithm is robust with respect to the Reynolds number and can reproduce the velocity fields and the main structures of the missing flow fields for 50% and 75% of missing data.

  14. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    NASA Astrophysics Data System (ADS)

    Washeleski, Robert L.; Meyer, Edmond J.; King, Lyon B.

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  15. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc. PMID:27176912

  16. New method to compute Rcomplete enables maximum likelihood refinement for small datasets

    PubMed Central

    Luebben, Jens; Gruene, Tim

    2015-01-01

    The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, its use did not spread in the crystallographic community, which instead favored the cross-validation method known as Rfree. The importance of Rfree has grown beyond a pure validation tool. However, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widely spread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree, with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum likelihood parameters even for small datasets. Rcomplete thus enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination, including high-pressure studies, neutron diffraction studies, and datasets from free electron lasers. PMID:26150515

  17. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
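
    A single-shot, integer-pixel analogue of the transform-domain correlation step is ordinary phase correlation; the closed-loop technique above iterates a weighted version of this idea to track motion. A minimal sketch with a hypothetical test image:

      import numpy as np

      def phase_correlation_shift(ref, img):
          """Integer-pixel shift estimate from the phase-correlation peak."""
          F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
          F /= np.clip(np.abs(F), 1e-12, None)        # keep phase only
          corr = np.abs(np.fft.ifft2(F))
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Peaks past the midpoint of an axis correspond to negative shifts.
          return tuple(p if p <= s // 2 else p - s
                       for p, s in zip(peak, corr.shape))

      rng = np.random.default_rng(4)
      ref = rng.random((128, 128))
      img = np.roll(ref, shift=(5, -9), axis=(0, 1))
      print(phase_correlation_shift(ref, img))        # expected (5, -9)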

  18. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  19. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    SciTech Connect

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  20. Inertial Sensor Arrays, Maximum Likelihood, and Cramér–Rao Bound

    NASA Astrophysics Data System (ADS)

    Skog, Isaac; Nilsson, John-Olof; Handel, Peter; Nehorai, Arye

    2016-08-01

    A maximum likelihood estimator for fusing the measurements in an inertial sensor array is presented. The maximum likelihood estimator is concentrated, and an iterative solution method is presented for the resulting low-dimensional optimization problem. The Cramér-Rao bound for the corresponding measurement fusion problem is derived and used to assess the performance of the proposed method, as well as to analyze how the geometry of the array and sensor errors affect the accuracy of the measurement fusion. The angular velocity information gained from the accelerometers in the array is shown to be proportional to the square of the array dimension and to the square of the angular speed. In our simulations the proposed fusion method attains the Cramér-Rao bound and outperforms the current state-of-the-art method for measurement fusion in accelerometer arrays. Further, in contrast to the state-of-the-art method, which requires a 3D array to work, the proposed method also works for 2D arrays. The theoretical findings are compared to results from real-world experiments with an in-house developed array that consists of 192 sensing elements.

  1. Automated Maximum Likelihood Separation of Signal from Baseline in Noisy Quantal Data

    PubMed Central

    Bruno, William J.; Ullah, Ghanim; Daniel Mak, Don-On; Pearson, John E.

    2013-01-01

    Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system’s behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225
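
    As a drift-free toy version of this idea, the sketch below runs EM on observations that are integer multiples of an unknown quantal step plus Gaussian noise, treating the integer level at each sample as the hidden variable; the baseline-drift state of the actual method is omitted, and all numbers are hypothetical.

      import numpy as np

      def quantal_em(y, k_max=4, n_iter=100):
          """EM for y = q*k + noise with hidden integer level k in 0..k_max."""
          ks = np.arange(k_max + 1)
          q, sigma = y.max() / k_max, y.std()
          w = np.full(ks.size, 1.0 / ks.size)
          for _ in range(n_iter):
              # E-step: responsibilities of each level for each sample.
              logp = (-0.5 * ((y[:, None] - q * ks) / sigma) ** 2
                      - np.log(sigma) + np.log(w))
              r = np.exp(logp - logp.max(axis=1, keepdims=True))
              r /= r.sum(axis=1, keepdims=True)
              # M-step: closed-form updates for step size, noise and weights.
              q = (r * ks * y[:, None]).sum() / np.clip((r * ks**2).sum(),
                                                        1e-12, None)
              sigma = np.sqrt((r * (y[:, None] - q * ks) ** 2).sum() / y.size)
              w = np.clip(r.mean(axis=0), 1e-12, None)
          return q, sigma, w

      rng = np.random.default_rng(5)
      levels = rng.integers(0, 4, size=5000)
      y = 1.7 * levels + rng.normal(0.0, 0.3, size=5000)
      print(quantal_em(y)[:2])   # should approach (1.7, 0.3)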

  2. Automated maximum likelihood separation of signal from baseline in noisy quantal data.

    PubMed

    Bruno, William J; Ullah, Ghanim; Mak, Don-On Daniel; Pearson, John E

    2013-07-01

    Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system's behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225

  3. A dual formulation of a penalized maximum likelihood x-ray CT reconstruction problem

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Taguchi, Katsuyuki; Gullberg, Grant T.; Tsui, Benjamin M. W.

    2009-02-01

    This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem, and its performance is compared with the filtered back-projection algorithm and with the primal formulation optimized by using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than those from the primal formulation using surrogate functions. However, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since surrogate functions in general can provide a diagonal approximation of the Hessian matrix of the objective function, further convergence speedup may be achieved by deriving the surrogate function of the dual objective function.

  4. Accuracy of Maximum Likelihood Parameter Estimators for Heston Stochastic Volatility SDE

    NASA Astrophysics Data System (ADS)

    Azencott, Robert; Gadhyan, Yutheeka

    2015-04-01

    We study approximate maximum likelihood estimators (MLEs) for the parameters of the widely used Heston stock price and volatility stochastic differential equations (SDEs). We compute explicit closed-form estimators maximizing the discretized log-likelihood of observations recorded at equally spaced times. We compute the asymptotic biases of these parameter estimators for a fixed sampling interval, as well as the rate at which these biases vanish as the sampling interval tends to zero. We determine asymptotically consistent explicit modifications of these MLEs. For the Heston volatility SDE, we identify a canonical form determined by two canonical parameters that are explicit functions of the original SDE parameters. We analyze theoretically the asymptotic distribution of the MLEs and of their consistent modifications, and we outline their concrete speeds of convergence by numerical simulations. We clarify the precise dichotomy between asymptotic normality and attraction by stable-like distributions with heavy tails. We illustrate numerical model fitting for Heston SDEs by two concrete examples, one for daily data and one for intraday data, both with moderate parameter values.

  5. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of the mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
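
    When the component densities are fully known, the ML estimate of the proportions has a simple EM form: average the posterior class probabilities and iterate. A minimal sketch under that assumption, with hypothetical normal components:

      import numpy as np
      from scipy.stats import norm

      def em_proportions(x, comps, n_iter=200):
          """EM updates for the mixing proportions p_i with known f_i."""
          dens = np.column_stack([c.pdf(x) for c in comps])
          p = np.full(dens.shape[1], 1.0 / dens.shape[1])
          for _ in range(n_iter):
              r = dens * p
              r /= r.sum(axis=1, keepdims=True)   # posterior class probabilities
              p = r.mean(axis=0)                  # ML update of the proportions
          return p

      rng = np.random.default_rng(6)
      x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
      print(em_proportions(x, [norm(0, 1), norm(3, 1)]))  # roughly [0.7, 0.3]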

  6. On maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions.

    PubMed

    Hornik, Kurt; Grün, Bettina

    2014-01-01

    Maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions involves inverting the ratio R_ν = I_{ν+1}/I_ν of modified Bessel functions, and computational methods are required to invert this function using approximative or iterative algorithms. In this paper we use Amos-type bounds for R_ν to deduce sharper bounds for the inverse function, determine the approximation error of these bounds, and use these to propose a new approximation for which the error tends to zero when the inverse of R_ν is evaluated at values tending to 1 (from the left). We show that previously introduced rational bounds for R_ν which are invertible using quadratic equations cannot be used to improve these bounds. PMID:25309045
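
    For orientation, the widely used Banerjee et al. approximation to this inverse (a starting point that the sharper bounds above improve upon) is a one-liner; the sample below is only a crude stand-in for von Mises-Fisher data.

      import numpy as np

      def vmf_kappa_approx(X):
          """Banerjee et al. approximation to the ML concentration estimate:
          kappa ~ R(d - R^2)/(1 - R^2), with R the mean resultant length."""
          d = X.shape[1]
          r_bar = np.linalg.norm(X.mean(axis=0))
          return r_bar * (d - r_bar**2) / (1.0 - r_bar**2)

      rng = np.random.default_rng(7)
      # Crude clustered unit vectors around the first axis (not exactly vMF).
      X = rng.normal(size=(2000, 3)) + np.array([4.0, 0.0, 0.0])
      X /= np.linalg.norm(X, axis=1, keepdims=True)
      print(vmf_kappa_approx(X))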

  7. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters.

    PubMed

    Li, Xinya; Deng, Z Daniel; Sun, Yannan; Martinez, Jayson J; Fu, Tao; McMichael, Geoffrey A; Carlson, Thomas J

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517
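
    The core numerical task, estimating a position from time differences of arrival, can be sketched with a Gauss-Newton iteration on the range-difference residuals; the array geometry, noise level and sound speed below are hypothetical, and the paper's approximate ML solver adds robustness beyond this least-squares core.

      import numpy as np

      def tdoa_gauss_newton(hydro, tdoa, c=1500.0, n_iter=25):
          """Gauss-Newton position estimate from time differences of arrival,
          referenced to the first hydrophone; c is the sound speed in m/s."""
          x = hydro.mean(axis=0)                  # initial guess: array centroid
          for _ in range(n_iter):
              d = np.linalg.norm(hydro - x, axis=1)
              resid = c * tdoa - (d[1:] - d[0])   # measured minus modeled
              grad = (x - hydro) / d[:, None]     # d(range_i)/dx, unit vectors
              J = grad[1:] - grad[0]              # Jacobian of modeled differences
              x = x + np.linalg.lstsq(J, resid, rcond=None)[0]
          return x

      rng = np.random.default_rng(8)
      hydro = rng.uniform(-20.0, 20.0, size=(6, 3))   # hypothetical 6-element array
      src = np.array([4.0, -7.0, 2.5])
      d = np.linalg.norm(hydro - src, axis=1)
      tdoa = (d[1:] - d[0]) / 1500.0 + rng.normal(0.0, 1e-6, size=5)
      print(tdoa_gauss_newton(hydro, tdoa))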

  8. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.

  9. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  10. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, T.A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.
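
    For orientation, a plain (unadjusted) type 1 censored MLE in a lognormal setting looks as follows: detected values contribute densities and non-detects contribute the CDF at the detection limit. The AMLE described above adds a bias adjustment that this sketch omits, and all data here are simulated.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def censored_lognormal_mle(obs, detect_limit):
          """MLE for lognormal data with type 1 left-censoring at detect_limit."""
          y = np.log(obs[obs >= detect_limit])      # detected values
          n_cens = np.sum(obs < detect_limit)       # number of non-detects
          L = np.log(detect_limit)

          def nll(params):
              mu, log_sigma = params
              sigma = np.exp(log_sigma)             # keeps sigma positive
              return -(norm.logpdf(y, mu, sigma).sum()
                       + n_cens * norm.logcdf(L, mu, sigma))

          res = minimize(nll, x0=[y.mean(), np.log(y.std())],
                         method="Nelder-Mead")
          return res.x[0], np.exp(res.x[1])         # (mu, sigma) on log scale

      rng = np.random.default_rng(9)
      sample = rng.lognormal(mean=0.5, sigma=1.0, size=400)
      print(censored_lognormal_mle(sample, detect_limit=0.8))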

  11. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.

  12. Application of maximum likelihood reconstruction of subaperture data for measurement of large flat mirrors

    SciTech Connect

    Su Peng; Burge, James H.; Parks, Robert E.

    2010-01-01

    Interferometers accurately measure the difference between two wavefronts, one from a reference surface and the other from an unknown surface. If the reference surface is near perfect or is accurately known from some other test, then the shape of the unknown surface can be determined. We investigate the case where neither the reference surface nor the surface under test is well known. By making multiple shear measurements where both surfaces are translated and/or rotated, we obtain sufficient information to reconstruct the figure of both surfaces with a maximum likelihood reconstruction method. The method is demonstrated for the measurement of a 1.6 m flat mirror to 2 nm rms, using a smaller reference mirror that had significant figure error.

  13. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    SciTech Connect

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.

  14. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  15. W-IQ-TREE: a fast online phylogenetic tool for maximum likelihood analysis.

    PubMed

    Trifinopoulos, Jana; Nguyen, Lam-Tung; von Haeseler, Arndt; Minh, Bui Quang

    2016-07-01

    This article presents W-IQ-TREE, an intuitive and user-friendly web interface and server for IQ-TREE, an efficient phylogenetic software package for maximum likelihood analysis. W-IQ-TREE supports multiple sequence types (DNA, protein, codon, binary and morphology) in common alignment formats and a wide range of evolutionary models, including mixture and partition models. W-IQ-TREE performs fast model selection, partition scheme finding, efficient tree reconstruction, ultrafast bootstrapping, branch tests, and tree topology tests. All computations are conducted on a dedicated computer cluster, and users receive the results via URL or email. W-IQ-TREE is available at http://iqtree.cibiv.univie.ac.at; it is free and open to all users, with no login requirement. PMID:27084950

  16. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    DOE PAGESBeta

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; et al

    2014-11-27

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
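
    Assuming independent Gaussian errors on the time-difference-of-arrival (TDOA) measurements, maximum likelihood localization reduces to nonlinear least squares, which the sketch below solves by Gauss-Newton iteration. The geometry, noise level, and sound speed are invented for illustration and this is not the JSATS solver itself.

      import numpy as np

      c = 1482.0                                 # speed of sound in water (m/s)
      hyd = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                      [0., 0., 10.], [10., 10., 5.]])   # hydrophone positions
      src = np.array([3.0, 6.0, 2.0])            # true tag position

      rng = np.random.default_rng(2)
      toa = np.linalg.norm(hyd - src, axis=1) / c + 20e-6 * rng.standard_normal(5)
      tdoa = toa[1:] - toa[0]                    # differences vs. hydrophone 0

      x = np.array([5.0, 5.0, 5.0])              # initial guess
      for _ in range(20):                        # Gauss-Newton on TDOA residuals
          d = np.linalg.norm(hyd - x, axis=1)
          res = tdoa - (d[1:] - d[0]) / c
          u = (x - hyd) / d[:, None]             # unit vectors: d(range)/dx
          Jac = (u[1:] - u[0]) / c               # Jacobian of predicted TDOAs
          x = x + np.linalg.lstsq(Jac, res, rcond=None)[0]
      print(x)                                    # close to the true position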

  18. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

    In human genetic maps, differences between female (x_f) and male (x_m) map distances may be characterized by the ratio, R = x_f/x_m, or the relative difference, Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1). For a map of genetic markers spread along a chromosome, Q(d) may be viewed as a graph of Q versus the midpoints, d, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of Q(d) along the chromosome, where Q(d) is expressed as a polynomial in d. Stepwise maximum likelihood polynomial regression of Q is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.
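
    A rough sketch of the stepwise idea (ours, not the TREND package): fit polynomials in d of increasing degree to observed Q values and raise the order only while a likelihood-based criterion improves. Here a Gaussian AIC stands in for the paper's maximum likelihood tests, and the data are simulated.

      import numpy as np

      rng = np.random.default_rng(3)
      d = np.linspace(0, 100, 25)                # interval midpoints
      q_true = 0.3 + 0.004 * d                   # a linear female/male trend
      q_obs = q_true + 0.05 * rng.standard_normal(d.size)

      def fit_poly(deg):
          X = np.vander(d, deg + 1)
          beta, *_ = np.linalg.lstsq(X, q_obs, rcond=None)
          return np.sum((q_obs - X @ beta) ** 2)   # residual sum of squares

      n = d.size
      best_deg = 0
      best_aic = n * np.log(fit_poly(0) / n) + 2 * 1
      for deg in range(1, 5):
          aic = n * np.log(fit_poly(deg) / n) + 2 * (deg + 1)
          if aic >= best_aic:
              break                              # stop when a term stops paying
          best_deg, best_aic = deg, aic
      print("selected degree:", best_deg)        # typically 1 for this data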

  19. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
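
    The core MNRES idea, in miniature: instead of integrating sensitivity equations or differencing the cost function, fit a local quadratic surface to sampled cost values and take Newton-Raphson steps from its fitted gradient and Hessian. The cost function below is an invented stand-in for an output-error criterion.

      import numpy as np

      def cost(p):                               # stand-in output-error cost
          return (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 0.5) ** 2 + p[0] * p[1]

      def quad_features(p):                      # basis for a quadratic surface
          x, y = p
          return np.array([1.0, x, y, x * x, y * y, x * y])

      rng = np.random.default_rng(4)
      p = np.zeros(2)
      for _ in range(6):
          # Sample the cost on a small cloud around p, fit a quadratic
          # surface, and use its gradient and Hessian in place of
          # finite-difference sensitivities.
          pts = p + 0.1 * rng.standard_normal((12, 2))
          A = np.vstack([quad_features(q) for q in pts])
          f = np.array([cost(q) for q in pts])
          c = np.linalg.lstsq(A, f, rcond=None)[0]
          grad = np.array([c[1] + 2 * c[3] * p[0] + c[5] * p[1],
                           c[2] + 2 * c[4] * p[1] + c[5] * p[0]])
          hess = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
          p = p - np.linalg.solve(hess, grad)    # Newton-Raphson step
      print(p)                                    # near the true minimizer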

  20. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid-body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. To obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  1. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics

    PubMed Central

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-sung; Maki, Kara L.; Ross, David S.; Aquavella, James V.; Rolland, Jannick P.

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-Domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Assuming that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision. PMID:24156045

  2. A maximum likelihood analysis of the CoGeNT public dataset

    NASA Astrophysics Data System (ADS)

    Kelso, Chris

    2016-06-01

    The CoGeNT detector, located in the Soudan Underground Laboratory in northern Minnesota, consists of a 475 g (330 g fiducial mass) p-type point-contact germanium crystal that measures the ionization charge created by nuclear recoils. This detector has searched for recoils created by dark matter since December 2009. We analyze the public dataset from the CoGeNT experiment to search for evidence of dark matter interactions with the detector. We perform an unbinned maximum likelihood fit to the data and compare the significance of different WIMP hypotheses relative to each other and to the null hypothesis of no WIMP interactions. This work presents the current status of the analysis.
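
    An unbinned maximum likelihood fit of this general kind can be sketched in a few lines: each event energy enters the likelihood individually through a signal-plus-background density, and hypotheses are compared via the likelihood ratio. The toy spectrum, window, and event counts below are ours and bear no relation to the actual CoGeNT data.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      lo, hi = 0.5, 3.0                          # toy energy window
      # Toy "events": an exponential excess over a flat background.
      sig = lo + rng.exponential(0.3, 150); sig = sig[sig < hi]
      bkg = rng.uniform(lo, hi, 600)
      events = np.concatenate([sig, bkg])

      def nll(params):                           # unbinned negative log-likelihood
          f, tau = params                        # signal fraction, decay constant
          if not (0 < f < 1 and tau > 0):
              return np.inf
          norm = tau * (np.exp(-lo / tau) - np.exp(-hi / tau))
          p_sig = np.exp(-events / tau) / norm
          p_bkg = 1.0 / (hi - lo)
          return -np.sum(np.log(f * p_sig + (1 - f) * p_bkg))

      fit = minimize(nll, x0=[0.3, 0.5], method="Nelder-Mead")
      # Significance vs. the no-signal null comes from the likelihood ratio.
      print(fit.x)                                # near f = 0.2, tau = 0.3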

  3. Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    NASA Astrophysics Data System (ADS)

    Liu, Liang; Wei, Ping; Liao, Hong Shu

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods degrades when the true DOAs do not lie exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the condition of a large array, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on this convexity, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for a large array and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness, and speed.
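
    For a single source on a half-wavelength uniform linear array, the deterministic ML estimate minimizes tr[(I - P_A(theta)) R], where P_A projects onto the steering-vector span. The sketch below, our own construction, scans a coarse grid and then refines within the locally convex basin, mirroring the paper's strategy.

      import numpy as np

      M = 12                                     # sensors in a half-wavelength ULA
      theta_true = 17.3                          # degrees, deliberately off-grid
      rng = np.random.default_rng(6)

      def steer(theta_deg):
          k = np.pi * np.sin(np.radians(theta_deg))
          return np.exp(1j * k * np.arange(M))

      snaps = 200
      s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
      X = np.outer(steer(theta_true), s)
      X += 0.3 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
      R = X @ X.conj().T / snaps                 # sample covariance matrix

      def dml_cost(theta_deg):                   # tr[(I - P_A) R] for one source
          a = steer(theta_deg)
          Pa = np.outer(a, a.conj()) / M         # projection onto span{a}
          return np.real(np.trace(R - Pa @ R))

      coarse = np.arange(-90.0, 90.0, 1.0)       # discretized sampling grid
      th = coarse[np.argmin([dml_cost(t) for t in coarse])]
      fine = np.arange(th - 1.0, th + 1.0, 0.01) # search inside the convex basin
      th = fine[np.argmin([dml_cost(t) for t in fine])]
      print(th)                                   # close to 17.3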

  4. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    PubMed Central

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-01-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature. PMID:25427517

  5. Maximum Likelihood Estimation of the Broken Power Law Spectral Parameters with Detector Design Applications

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    A maximum likelihood procedure is developed for estimating the three spectral parameters of an assumed broken power law energy spectrum from simulated detector responses, and its statistical properties are investigated. The estimation procedure is then generalized for application to real cosmic-ray data. To illustrate the procedure and its utility, analytical methods were developed in conjunction with a Monte Carlo simulation to explore the combination of the expected cosmic-ray environment with a generic space-based detector and its planned life cycle, allowing us to explore various detector features and their subsequent influence on estimating the spectral parameters. This study permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits on the design envelope.
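
    A minimal version of the estimation step, with invented parameter values and no detector response folded in: write the broken power law as a normalized density, simulate events from it, and minimize the negative log-likelihood over the two spectral indices and the break energy.

      import numpy as np
      from scipy.optimize import minimize

      e_min, e_max = 1.0, 1e3                    # energy range (arbitrary units)
      grid = np.geomspace(e_min, e_max, 4000)

      def bpl_pdf(E, g1, g2, eb):                # normalized broken power law
          f = np.where(E < eb, E ** -g1, eb ** (g2 - g1) * E ** -g2)
          norm = np.trapz(np.where(grid < eb, grid ** -g1,
                                   eb ** (g2 - g1) * grid ** -g2), grid)
          return f / norm

      # Simulate events from the true spectrum via inverse-transform sampling.
      rng = np.random.default_rng(7)
      pdf = bpl_pdf(grid, 2.7, 3.1, 100.0)
      cdf = np.cumsum(pdf * np.gradient(grid)); cdf /= cdf[-1]
      events = np.interp(rng.uniform(size=5000), cdf, grid)

      def nll(p):                                # negative log-likelihood
          g1, g2, eb = p
          if not (e_min < eb < e_max):
              return np.inf
          return -np.sum(np.log(bpl_pdf(events, g1, g2, eb)))

      fit = minimize(nll, x0=[2.0, 4.0, 50.0], method="Nelder-Mead")
      print(fit.x)                                # typically lands near (2.7, 3.1, 100)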

  6. A New Maximum-likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales

    NASA Astrophysics Data System (ADS)

    Ahlers, M.; BenZvi, S. Y.; Desiati, P.; Díaz-Vélez, J. C.; Fiorino, D. W.; Westerhoff, S.

    2016-05-01

    The arrival directions of TeV–PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic-ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic-ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic-ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.
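
    The simultaneous fit of anisotropy and acceptance can be caricatured by a factorized Poisson model mu[t, p] = expo[t] * sky[p], whose maximum likelihood solution is obtained by alternating closed-form updates. This is a drastic simplification of the paper's iterative reconstruction, with all numbers invented.

      import numpy as np

      rng = np.random.default_rng(8)
      n_t, n_p = 24, 60                          # sidereal time bins, sky pixels
      expo_true = 1.0 + 0.1 * rng.standard_normal(n_t)                # acceptance
      sky_true = 1.0 + 1e-3 * np.cos(np.linspace(0, 2 * np.pi, n_p))  # per-mille dipole
      counts = rng.poisson(5e5 * np.outer(expo_true, sky_true))

      # Alternating maximum likelihood for mu[t, p] = expo[t] * sky[p];
      # each conditional update is closed form for Poisson counts.
      expo = counts.sum(axis=1).astype(float)
      sky = np.ones(n_p)
      for _ in range(50):
          sky = counts.sum(axis=0) / expo.sum()
          expo = counts.sum(axis=1) / sky.sum()
      rel_int = sky / sky.mean()                 # relative intensity map
      print(rel_int.max() - 1.0)                  # recovers the ~1e-3 anisotropy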

  7. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  8. CodonPhyML: Fast Maximum Likelihood Phylogeny Estimation under Codon Substitution Models

    PubMed Central

    Gil, Manuel; Zoller, Stefan; Anisimova, Maria

    2013-01-01

    Markov models of codon substitution naturally incorporate the structure of the genetic code and the selection intensity at the protein level, providing a more realistic representation of protein-coding sequences compared with nucleotide or amino acid models. Thus, for protein-coding genes, phylogenetic inference is expected to be more accurate under codon models. So far, phylogeny reconstruction under codon models has been elusive due to the computational difficulties of dealing with high-dimension matrices. Here, we present CodonPhyML, a fast maximum likelihood (ML) package for phylogenetic inference that offers hundreds of different codon models, the largest variety to date, for phylogeny inference by ML. CodonPhyML is tested on simulated and real data and is shown to offer excellent speed and convergence properties. In addition, CodonPhyML includes the most recent fast methods for estimating phylogenetic branch supports and provides an integral framework for model selection, including amino acid and DNA models. PMID:23436912

  9. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.

    2004-05-25

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.
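
    Schematically, once each model has been calibrated by ML, averaging weights can be computed from an information criterion and the projections and variances combined with the standard BMA identities. The criterion values, predictions, and variances below are placeholders, not the ALRS results.

      import numpy as np

      # Hypothetical per-model results after ML calibration of four
      # variogram models: an information-criterion value plus a kriging
      # prediction and variance at one target block.
      kic = np.array([412.0, 415.5, 418.2, 423.9])   # smaller = better
      pred = np.array([-13.1, -13.4, -12.8, -13.0])  # log-permeability projections
      var = np.array([0.20, 0.25, 0.18, 0.30])       # kriging variances

      w = np.exp(-0.5 * (kic - kic.min()))
      w /= w.sum()                                    # posterior model probabilities

      mean = np.sum(w * pred)                         # model-averaged prediction
      # Total variance = within-model part + between-model spread.
      total_var = np.sum(w * var) + np.sum(w * (pred - mean) ** 2)
      print(mean, total_var)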

  10. Evaluation of Maximum Likelihood Ensemble Filter for Real-Time Assimilation of Streamflow Data in Operational Streamflow Forecasting

    NASA Astrophysics Data System (ADS)

    Rafieei Nasab, A.; Seo, D.; Lee, H.; Kim, S.

    2012-12-01

    Various data assimilation (DA) methods have been used and are being explored for use in operational streamflow forecasting. For ensemble forecasting, the ensemble Kalman filter (EnKF) is an appealing candidate owing to its familiarity and relative simplicity. EnKF, however, is optimal only if the observation equation is linear. As such, without an iterative approach, EnKF may not be appropriate for assimilating streamflow data into soil moisture accounting models. The maximum likelihood ensemble filter (MLEF), on the other hand, is not subject to this limitation. Also, as an ensemble extension of variational assimilation (VAR), MLEF offers a strong connection with the traditional single-valued forecast process through the control, or maximum likelihood, solution. In this work, we apply MLEF to the Sacramento (SAC) soil moisture accounting model and unit hydrograph (UH) for assimilation of streamflow, precipitation, and potential evaporation (PE) data. A comparison between VAR and the control run of MLEF is made to verify the performance of MLEF, including that of the gradient approximation, which does not require adjoint code. Sensitivity analysis is then performed to assess the performance of MLEF with respect to the ensemble size, the number of streamflow observations assimilated in each cycle, the statistical parameters for observation errors in streamflow, precipitation, and PE, and for the model error associated with the runoff from SAC. We also identify the science issues and challenges toward operationalization.
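
    The control solution at the heart of MLEF can be sketched as a cost minimization in the ensemble-spanned subspace; the state size, observation operator, and error levels below are invented for illustration and do not represent the SAC/UH configuration.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      n, m = 40, 10                              # state size, ensemble size
      x_b = np.ones(n)                           # background (first guess) state
      Z = 0.3 * rng.standard_normal((n, m))      # ensemble perturbations ~ B^(1/2)

      def h(x):                                  # nonlinear observation operator
          return x[::4] ** 2                     # e.g. flows derived from storages

      x_true = x_b + Z @ rng.standard_normal(m)
      r_std = 0.05
      y = h(x_true) + r_std * rng.standard_normal(h(x_b).size)

      def cost(w):                               # variational cost in ensemble space
          x = x_b + Z @ w
          dy = (y - h(x)) / r_std
          return 0.5 * w @ w + 0.5 * dy @ dy

      w_opt = minimize(cost, np.zeros(m), method="BFGS").x
      x_a = x_b + Z @ w_opt                      # control (maximum likelihood) analysis
      print(np.linalg.norm(x_a - x_true) / np.linalg.norm(x_b - x_true))  # < 1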